Row-to-column transformation (行转列) in Spark SQL
Connect to the Spark SQL Thrift server with beeline:
beeline -u "jdbc:hive2://172.16.12.46:10015" -n spark -p spark -d org.apache.hive.jdbc.HiveDriver --color=true --silent=false --fastConnect=false --verbose=true
Run the query:
select x.data_number,
       concat_ws(',', collect_list(cast(x.data_day_max as string))),
       concat_ws(',', collect_list(cast(x.data_day_hour as string)))
from (select data_number, data_day_max, data_day_hour
      from jt.data_day
      where data_type = 'SCT'
        and data_version in ('SCTS301', 'SCTS302')
        and data_number in ('SCTS301-D', 'SCTS302-D')
        and data_day_date = '2013-11-01'
      order by data_day_hour) x
group by data_number;
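What the query does, conceptually: the inner query selects and sorts the rows, then the outer `group by data_number` collapses each group's values into one comma-joined string via `concat_ws(',', collect_list(cast(... as string)))`. A minimal Python sketch of that aggregation, using made-up sample rows (the values are hypothetical, not from the real `jt.data_day` table):

```python
from collections import defaultdict

# Hypothetical sample rows: (data_number, data_day_max, data_day_hour),
# already sorted by data_day_hour, as the inner query's ORDER BY produces.
rows = [
    ("SCTS301-D", 10.5, 0),
    ("SCTS301-D", 12.0, 1),
    ("SCTS302-D", 8.2, 0),
    ("SCTS302-D", 9.9, 1),
]

# GROUP BY data_number: collect both columns per group,
# casting each value to a string (cast(... as string)).
groups = defaultdict(lambda: ([], []))
for number, day_max, hour in rows:
    groups[number][0].append(str(day_max))
    groups[number][1].append(str(hour))

# concat_ws(',', collect_list(...)): join each collected list with commas.
result = {k: (",".join(maxes), ",".join(hours))
          for k, (maxes, hours) in groups.items()}
print(result)
# → {'SCTS301-D': ('10.5,12.0', '0,1'), 'SCTS302-D': ('8.2,9.9', '0,1')}
```

So each `data_number` comes back as a single row whose columns hold the comma-separated hourly values.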
To deduplicate the collected values, replace collect_list with collect_set.
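The difference between the two in a nutshell: `collect_list` keeps every value including duplicates, while `collect_set` keeps only distinct values (and, in Spark, does not guarantee order). A small Python sketch with hypothetical duplicate values:

```python
# Hypothetical duplicated values within one group.
values = ["10.5", "12.0", "10.5", "12.0"]

# collect_list-like: duplicates are preserved.
as_list = ",".join(values)

# collect_set-like: only distinct values survive; dict.fromkeys is used
# here to deduplicate (it happens to preserve insertion order in Python,
# which Spark's collect_set does not promise).
as_set = ",".join(dict.fromkeys(values))

print(as_list)  # → 10.5,12.0,10.5,12.0
print(as_set)   # → 10.5,12.0
```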