HBase shell operations
Command groups: the shell organizes its commands into groups; the important ones (DML and DDL) are walked through below.
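The shell's built-in help lists these command groups; a quick way to explore them before the examples below (output omitted):

hbase> help              # overview of all commands, grouped into command groups
hbase> help "general"    # help for one command group
hbase> help "scan"       # help for a single command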
# Insert data: put takes the table, row key, column family:qualifier, and value
hbase(main):020:0> put "student","1","info:name","zhangsan"
Took 0.1132 seconds
# Update data: a put to the same row key and column overwrites the existing value
put "student","1","info:name","donghao"
hbase(main):006:0> scan 'student'
ROW                  COLUMN+CELL
 1                   column=info:age, timestamp=1634985751676, value=20
 1                   column=info:name, timestamp=1635649429816, value=donghao
...
# Scan queries (rows 2, 3 and 10 that appear below were inserted with additional put commands not shown here)
hbase(main):030:0> scan "student"
ROW                  COLUMN+CELL
 1                   column=info:age, timestamp=1634985751676, value=20
 1                   column=info:name, timestamp=1634985423241, value=zhangsan
 2                   column=info:age, timestamp=1634985758246, value=30
 2                   column=info:name, timestamp=1634985764073, value=lisi
2 row(s)
Took 0.0409 seconds
hbase(main):032:0> scan "student",{COLUMNS => "info:name"}
ROW                  COLUMN+CELL
 1                   column=info:name, timestamp=1634985423241, value=zhangsan
 2                   column=info:name, timestamp=1634985764073, value=lisi
2 row(s)
Took 0.0364 seconds
hbase(main):035:0> scan "student",{COLUMNS => ["info"], TIMERANGE => [1634985758246, 1634985758247]}
ROW                  COLUMN+CELL
 2                   column=info:age, timestamp=1634985758246, value=30
1 row(s)
Took 0.0681 seconds
# Note: STARTROW/STOPROW is a half-open range; STARTROW is inclusive and STOPROW is exclusive
hbase(main):059:0> scan "student", {STARTROW => '1', STOPROW => '3'}
ROW                  COLUMN+CELL
 1                   column=info:age, timestamp=1634985751676, value=20
 1                   column=info:name, timestamp=1634985423241, value=zhangsan
 2                   column=info:age, timestamp=1634985758246, value=30
 2                   column=info:name, timestamp=1634985764073, value=lisi
2 row(s)
Took 0.0475 seconds
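Because STOPROW is exclusive, narrowing the range should drop row 2; a quick check against the same table (a sketch, the expected result is noted in the comment rather than captured from the session):

hbase> scan "student", {STARTROW => '1', STOPROW => '2'}   # with the rows present at this point, only row 1 is returned; the stop row '2' itself is excluded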
hbase(main):060:0> scan "student"
ROW                  COLUMN+CELL
 1                   column=info:age, timestamp=1634985751676, value=20
 1                   column=info:name, timestamp=1634985423241, value=zhangsan
 2                   column=info:age, timestamp=1634985758246, value=30
 2                   column=info:name, timestamp=1634985764073, value=lisi
 3                   column=info:name, timestamp=1634986673219, value=liwu
3 row(s)
Took 0.0413 seconds
# Sorting: row keys are ordered lexicographically (as byte strings) by default, so '10' sorts between '1' and '2'; zero-pad numeric keys (e.g. '01', '02', '10') if numeric order is needed
hbase(main):063:0> scan "student"
ROW                  COLUMN+CELL
 1                   column=info:age, timestamp=1634985751676, value=20
 1                   column=info:name, timestamp=1634985423241, value=zhangsan
 10                  column=info:name, timestamp=1634986961378, value=qck
 2                   column=info:age, timestamp=1634985758246, value=30
 2                   column=info:name, timestamp=1634985764073, value=lisi
 3                   column=info:name, timestamp=1634986673219, value=liwu
4 row(s)

scan usage (the examples below are from the shell's help for scan):
hbase> scan 'hbase:meta'
hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804000, 1303668904000]}
hbase> scan 't1', {REVERSED => true}
hbase> scan 't1', {ALL_METRICS => true}
hbase> scan 't1', {METRICS => ['RPC_RETRIES', 'ROWS_FILTERED']}
hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "(QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER => org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the Operation Attributes:
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which switches block caching for the scanner on (true) or off (false). By default it is enabled. Examples:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}
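Applied to the sample student table above, a few of these options can be combined; a sketch (output omitted):

hbase> scan "student", {COLUMNS => ['info:name'], LIMIT => 2}
hbase> scan "student", {REVERSED => true, LIMIT => 1}                    # last row in key order
hbase> scan "student", {FILTER => "ValueFilter(=, 'binary:lisi')"}       # only cells whose value is 'lisi'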
get
hbase(main):066:0> get "student","1",["info:name","info:age"]
COLUMN CELL
info:age timestamp=1634985751676, value=20
info:name timestamp=1634985423241, value=zhangsan
1 row(s)
get usage (the examples below are from the shell's help for get):
hbase> get 'ns1:t1', 'r1'
hbase> get 't1', 'r1'
hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}
hbase> get 't1', 'r1', {COLUMN => 'c1'}
hbase> get 't1', 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> get 't1', 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> get 't1', 'r1', 'c1'
hbase> get 't1', 'r1', 'c1', 'c2'
hbase> get 't1', 'r1', ['c1', 'c2']
hbase> get 't1', 'r1', {COLUMN => 'c1', ATTRIBUTES => {'mykey'=>'myvalue'}}
hbase> get 't1', 'r1', {COLUMN => 'c1', AUTHORIZATIONS => ['PRIVATE','SECRET']}
hbase> get 't1', 'r1', {CONSISTENCY => 'TIMELINE'}
hbase> get 't1', 'r1', {CONSISTENCY => 'TIMELINE', REGION_REPLICA_ID => 1}
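Against the same sample table the options look like this (a sketch; the VERSIONS example only returns more than one cell if the info family keeps multiple versions, see the alter example further down):

hbase> get "student", "1", {COLUMN => 'info:name', VERSIONS => 2}
hbase> get "student", "1", {FILTER => "ValueFilter(=, 'binary:donghao')"}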
Testing DDL:
# Create a table and its column families (create fails if no column family is given)
hbase(main):031:0> create 'studnet'
ERROR: Table must have at least one column family
For usage try 'help "create"'
Took 0.0113 seconds hbase(main):009:0> create 'student','info'
Created table student
Took 0.9784 seconds
=> Hbase::Table - student
hbase(main):010:0> create 'stu','info1','info2'
Created table stu
Took 0.7822 seconds
=> Hbase::Table - stu
hbase(main):011:0> list
TABLE
stu
student
2 row(s)
Took 0.0255 seconds
=> ["Student", "stu", "student", "test"]# 描述表信息
hbase(main):014:0> describe 'student'
Table student is ENABLED
student
COLUMN FAMILIES DESCRIPTION
{NAME => 'info', VERSIONS => '1' (number of cell versions to keep), EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE',
 CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW',
 CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE',
 BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
1 row(s)
Took 0.3769 seconds
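The attributes listed by describe can also be set when a table is created. A sketch (the table name 'scores' is made up) creating a family that keeps three versions and expires cells after one day:

hbase> create 'scores', {NAME => 'info', VERSIONS => 3, TTL => 86400}
hbase> describe 'scores'        # VERSIONS and TTL should now show the new values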
hbase(main):015:0> # Update the number of versions kept
hbase(main):015:0> alter 'student',{VERSIONS => '2'}
Unknown argument ignored: VERSIONS
Updating all regions with the new schema...
1/1 regions updated.
Done.
Took 2.5945 seconds
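The warning "Unknown argument ignored: VERSIONS" means the setting was not actually applied: VERSIONS is a column-family attribute and has to be passed together with NAME. Corrected command for the table above, with a describe to verify:

hbase> alter 'student', {NAME => 'info', VERSIONS => 2}
hbase> describe 'student'       # the info family should now report VERSIONS => '2'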
hbase(main):016:0> # Namespaces: a table is placed in a namespace by prefixing the table name (the 'bigdata' namespace must already exist at this point)
hbase(main):010:0> create "bigdata:student","info"
Created table bigdata:student
Took 0.8986 seconds
=> Hbase::Table - bigdata:student
hbase(main):011:0> list
TABLE
Student
bigdata:student
stu
test
4 row(s)
Took 0.0284 seconds
=> ["Student", "bigdata:student", "stu", "test"]# 删除namespace
hbase(main):012:0> drop_namespace "bigdata"

ERROR: org.apache.hadoop.hbase.constraint.ConstraintException: Only empty namespaces can be removed. Namespace bigdata has 1 tables
    at org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.prepareDelete(DeleteNamespaceProcedure.java:217)
    at org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.executeFromState(DeleteNamespaceProcedure.java:78)
    at org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.executeFromState(DeleteNamespaceProcedure.java:45)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:189)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:965)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1723)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1462)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1200(ProcedureExecutor.java:78)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2039)
For usage try 'help "drop_namespace"'

# A namespace can only be dropped once it is empty: disable and drop its tables first
hbase(main):013:0> disable "bigdata:student"
Took 0.9871 seconds
hbase(main):014:0> drop "bigdata:student"
Took 0.3229 seconds
hbase(main):015:0> drop_namespace "bigdata"
Took 0.1680 seconds
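For reference, the complete namespace round trip looks like this (a sketch using the same names; the namespace must be created before any table can be placed in it):

hbase> create_namespace 'bigdata'
hbase> create 'bigdata:student', 'info'
hbase> list_namespace_tables 'bigdata'
hbase> disable 'bigdata:student'
hbase> drop 'bigdata:student'
hbase> drop_namespace 'bigdata'
hbase> list_namespace                  # 'bigdata' is gone; the default and hbase namespaces remain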
