Getting started with Phoenix

http://phoenix.apache.org/Phoenix-in-15-minutes-or-less.html

 

Blah, blah, blah - I just want to get started!
Ok, great! Just follow our install instructions:

  • download and expand our installation tar
  • copy the Phoenix server jar that is compatible with your HBase installation (here, phoenix-4.9.0-HBase-1.2-server.jar) into the hbase/lib directory on every region server node; if a backup master is configured, copy it to the master node as well
  • restart the region servers (or, on a small cluster like this three-node one, simply restart all three nodes)
  • verify the install by connecting with sqlline.py, e.g. sqlline.py mini1:2181 (your ZooKeeper quorum); if it connects, the setup worked (this step was not yet done in these notes; see the sketch after this list)
  • add the phoenix client jar to the classpath of your HBase client
  • download and set up SQuirrel as your SQL client so you can issue ad hoc SQL against your HBase cluster
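
As a rough sketch of those install steps, the commands below assume this post's setup: the phoenix-4.9.0-HBase-1.2 binary tarball, HBase under /usr/local/hbase (a hypothetical path), three nodes named mini1, mini2 and mini3 (only mini1 appears in the notes above; the other names and the exact tarball name are assumptions), and ZooKeeper on mini1:2181. Adjust paths and hostnames to your own cluster.

# Download and unpack the Phoenix distribution (tarball name assumed from the jar version above)
tar -xzf apache-phoenix-4.9.0-HBase-1.2-bin.tar.gz
cd apache-phoenix-4.9.0-HBase-1.2-bin

# Copy the server jar into hbase/lib on every region server node
# (the loop also covers the master node, in case a backup master is configured)
for host in mini1 mini2 mini3; do
    scp phoenix-4.9.0-HBase-1.2-server.jar ${host}:/usr/local/hbase/lib/
done

# Restart the region servers so they pick up the new jar
for host in mini1 mini2 mini3; do
    ssh ${host} '/usr/local/hbase/bin/hbase-daemon.sh restart regionserver'
done

# Verify: if sqlline.py can connect through the ZooKeeper quorum, the install worked
bin/sqlline.py mini1:2181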

I don’t want to download and set up anything else!
Ok, fair enough - you can create your own SQL scripts and execute them using our command line tool instead. Let’s walk through an example now. Begin by navigating to the bin/ directory of your Phoenix install location.

  • First, let’s create a us_population.sql file, containing a table definition:
CREATE TABLE IF NOT EXISTS us_population (
      state CHAR(2) NOT NULL,
      city VARCHAR NOT NULL,
      population BIGINT
      CONSTRAINT my_pk PRIMARY KEY (state, city));
  • Now let’s create a us_population.csv file containing some data to put in that table:
NY,New York,8143197
CA,Los Angeles,3844829
IL,Chicago,2842518
TX,Houston,2016582
PA,Philadelphia,1463281
AZ,Phoenix,1461575
TX,San Antonio,1256509
CA,San Diego,1255540
TX,Dallas,1213825
CA,San Jose,912332
  • And finally, let’s create a us_population_queries.sql file containing a query we’d like to run on that data.
SELECT state as "State",count(city) as "City Count",sum(population) as "Population Sum"
FROM us_population
GROUP BY state
ORDER BY sum(population) DESC;
  • Execute the following command from a command terminal
./psql.py <your_zookeeper_quorum> us_population.sql us_population.csv us_population_queries.sql
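
Concretely, using the mini1:2181 quorum from the notes above (substitute your own), a full run looks roughly like this; the expected aggregate rows are simply arithmetic over the CSV data, though psql.py's output formatting will differ slightly:

cd /path/to/phoenix/bin    # wherever you unpacked Phoenix
./psql.py mini1:2181 us_population.sql us_population.csv us_population_queries.sql
# psql.py runs the .sql files in order and bulk-loads us_population.csv into the
# table matching the file name (US_POPULATION). The grouped query should report:
#   State  City Count  Population Sum
#   NY     1           8143197
#   CA     3           6012701
#   TX     3           4486916
#   IL     1           2842518
#   PA     1           1463281
#   AZ     1           1461575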

Congratulations! You’ve just created your first Phoenix table, inserted data into it, and executed an aggregate query with just a few lines of code in 15 minutes or less!

Big deal - 10 rows! What else you got?
Ok, ok - tough crowd. Check out our bin/performance.py script to create as many rows as you want, for any schema you come up with, and run timed queries against it.
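
A minimal invocation, assuming the same ZooKeeper quorum as above and the script's usual <zookeeper> <row count> arguments, looks roughly like this (the row count is up to you):

# Create the script's built-in schema, load 100,000 rows of generated data,
# and run a series of timed queries against it
./performance.py mini1:2181 100000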

Why is it called Phoenix anyway? Did some other project crash and burn and this is the next generation?
I’m sorry, but we’re out of time and space, so we’ll have to answer that next time!
