『壹』 How to create a HiveMetaStoreClient in Java
The following explains how Hive creates the metastore client.
First, look at a few code snippets.
public void createDatabase(Database db, boolean ifNotExist)
        throws AlreadyExistsException, HiveException {
    try {
        getMSC().createDatabase(db);
    } catch (AlreadyExistsException e) {
        if (!ifNotExist) {
            throw e;
        }
    } catch (Exception e) {
        throw new HiveException(e);
    }
}
=========
private IMetaStoreClient getMSC() throws MetaException {
    if (metaStoreClient == null) {
        metaStoreClient = createMetaStoreClient();
    }
    return metaStoreClient;
}
=========
private IMetaStoreClient createMetaStoreClient() throws MetaException {
    HiveMetaHookLoader hookLoader = new HiveMetaHookLoader() {
        public HiveMetaHook getHook(org.apache.hadoop.hive.metastore.api.Table tbl)
                throws MetaException {
            try {
                if (tbl == null) {
                    return null;
                }
                HiveStorageHandler storageHandler = HiveUtils.getStorageHandler(conf,
                        tbl.getParameters().get(META_TABLE_STORAGE));
                if (storageHandler == null) {
                    return null;
                }
                return storageHandler.getMetaHook();
            } catch (HiveException ex) {
                LOG.error(StringUtils.stringifyException(ex));
                throw new MetaException("Failed to load storage handler: " + ex.getMessage());
            }
        }
    };
    return new HiveMetaStoreClient(conf, hookLoader);
}
=========
public HiveMetaStoreClient(HiveConf conf, HiveMetaHookLoader hookLoader)
        throws MetaException {
    this.hookLoader = hookLoader;
    if (conf == null) {
        conf = new HiveConf(HiveMetaStoreClient.class);
    }
    this.conf = conf;

    localMetaStore = conf.getBoolVar(ConfVars.METASTORE_MODE);
    if (localMetaStore) {
        // instantiate the metastore handler directly instead of
        // connecting through the network
        client = new HiveMetaStore.HMSHandler("hive client", conf);
        isConnected = true;
        return;
    }

    // get the number of retries
    retries = HiveConf.getIntVar(conf, HiveConf.ConfVars.METASTORETHRIFTRETRIES);
    retryDelaySeconds = conf.getIntVar(ConfVars.METASTORE_CLIENT_CONNECT_RETRY_DELAY);

    // read the metastore URIs from the configuration
    if (conf.getVar(HiveConf.ConfVars.METASTOREURIS) != null) {
        String metastoreUrisString[] = conf.getVar(
                HiveConf.ConfVars.METASTOREURIS).split(",");
        metastoreUris = new URI[metastoreUrisString.length];
        try {
            int i = 0;
            for (String s : metastoreUrisString) {
                URI tmpUri = new URI(s);
                if (tmpUri.getScheme() == null) {
                    throw new IllegalArgumentException("URI: " + s
                            + " does not have a scheme");
                }
                metastoreUris[i++] = tmpUri;
            }
        } catch (IllegalArgumentException e) {
            throw (e);
        } catch (Exception e) {
            MetaStoreUtils.logAndThrowMetaException(e);
        }
    } else if (conf.getVar(HiveConf.ConfVars.METASTOREDIRECTORY) != null) {
        metastoreUris = new URI[1];
        try {
            metastoreUris[0] = new URI(conf
                    .getVar(HiveConf.ConfVars.METASTOREDIRECTORY));
        } catch (URISyntaxException e) {
            MetaStoreUtils.logAndThrowMetaException(e);
        }
    } else {
        LOG.error("NOT getting uris from conf");
        throw new MetaException("");
    }

    // finally open the store
    open();
}
The code above deserves a closer look, because it involves several configuration parameters and helps in understanding how a production deployment is set up. Start with the snippet below:
localMetaStore = conf.getBoolVar(ConfVars.METASTORE_MODE);
if (localMetaStore) {
    // instantiate the metastore handler directly instead of
    // connecting through the network
    client = new HiveMetaStore.HMSHandler("hive client", conf);
    isConnected = true;
    return;
}
PS: ConfVars.METASTORE_MODE is defined as METASTORE_MODE("hive.metastore.local", true), i.e. the hive.metastore.local property with a default of true. When it is true, the client embeds an HMSHandler in-process; otherwise it connects to a remote metastore over Thrift using the configured URIs.
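To put the pieces together, here is a minimal sketch of creating a metastore client directly. It assumes the same API and configuration keys as the walkthrough above (newer Hive releases have dropped hive.metastore.local); the Thrift URI, host, and port are placeholders.

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

public class MetaStoreClientDemo {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        // use a remote metastore instead of an embedded HMSHandler
        conf.setBoolVar(HiveConf.ConfVars.METASTORE_MODE, false);
        // placeholder URI; point it at the real metastore Thrift service
        conf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://metastore-host:9083");

        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            // list every database registered in the metastore
            for (String db : client.getAllDatabases()) {
                System.out.println(db);
            }
        } finally {
            client.close();
        }
    }
}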
『贰』 How to execute Hive commands or HiveQL from Java
String sql = "show tables; select * from test_tb limit 10";
List<String> command = new ArrayList<String>();
command.add("hive");
command.add("-e");
command.add(sql);
List<String> results = new ArrayList<String>();
ProcessBuilder hiveProcessBuilder = new ProcessBuilder(command);
Process hiveProcess = hiveProcessBuilder.start();
BufferedReader br = new BufferedReader(new InputStreamReader(
        hiveProcess.getInputStream()));
String data = null;
while ((data = br.readLine()) != null) {
    results.add(data);
}
『叁』 How to query the table names of all databases in Hive from Java
try {
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    String selectSql = "select * from db.data where address = '11111111'";
    Connection connect = DriverManager.getConnection("jdbc:hive://192.168.xx.xx:10000/db", "xxx", "xxx");
    PreparedStatement state = connect.prepareStatement(selectSql);
    ResultSet resultSet = state.executeQuery();
    while (resultSet != null && resultSet.next()) {
        System.out.println(resultSet.getString(1) + " " + resultSet.getString(2));
    }
} catch (Exception e) {
    e.printStackTrace();
}
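The snippet above runs a single query against one database. Below is a sketch that actually enumerates every database and its tables, assuming a HiveServer2 endpoint with the newer org.apache.hive.jdbc.HiveDriver driver; the host, port, and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ListHiveTables {
    public static void main(String[] args) throws Exception {
        // HiveServer2 driver; the older HiveServer1 uses
        // org.apache.hadoop.hive.jdbc.HiveDriver and jdbc:hive:// as above
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://192.168.xx.xx:10000/default", "user", "password");
             Statement stmt = conn.createStatement()) {
            // collect all database names first
            List<String> dbNames = new ArrayList<String>();
            try (ResultSet dbs = stmt.executeQuery("show databases")) {
                while (dbs.next()) {
                    dbNames.add(dbs.getString(1));
                }
            }
            // then list the tables in each database
            for (String db : dbNames) {
                try (ResultSet tables = stmt.executeQuery("show tables in " + db)) {
                    while (tables.next()) {
                        System.out.println(db + "." + tables.getString(1));
                    }
                }
            }
        }
    }
}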
『肆』 How to connect to a Hive database with Navicat for MySQL
Navicat cannot connect to Hive itself; it can only connect to Hive's metastore database, also known as the metadata store.
Feel free to ask if you have further questions.
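In other words, what Navicat (or any MySQL client) sees is an ordinary MySQL schema. The sketch below reads it directly over JDBC; the host, schema name "hive", and credentials are placeholders and must match javax.jdo.option.ConnectionURL in hive-site.xml, while TBLS and DBS are the metastore tables holding table and database metadata.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MetastoreDbDemo {
    public static void main(String[] args) throws Exception {
        // plain MySQL JDBC; the metastore schema is just a normal database
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://192.168.xx.xx:3306/hive", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "select d.NAME, t.TBL_NAME from TBLS t join DBS d on t.DB_ID = d.DB_ID")) {
            while (rs.next()) {
                // prints database.table for every table registered in the metastore
                System.out.println(rs.getString(1) + "." + rs.getString(2));
            }
        }
    }
}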
『伍』 How to execute Hive commands or HiveQL from Java
Since Java 1.5, ProcessBuilder can be used to launch a Process that runs a command or application in the current runtime environment (before 1.5 this was done with Runtime). See the Java documentation for details on ProcessBuilder. The calling code is as follows:
String sql = "show tables; select * from test_tb limit 10";
List<String> command = new ArrayList<String>();
command.add("hive");
command.add("-e");
command.add(sql);
List<String> results = new ArrayList<String>();
ProcessBuilder hiveProcessBuilder = new ProcessBuilder(command);
Process hiveProcess = hiveProcessBuilder.start();
BufferedReader br = new BufferedReader(new InputStreamReader(
        hiveProcess.getInputStream()));
String data = null;
while ((data = br.readLine()) != null) {
    results.add(data);
}
Here command can be any other Hive command; it does not have to be HiveQL.
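A self-contained sketch of the same approach that also merges stderr into stdout and checks the exit code, assuming the hive CLI is on the PATH:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class HiveCliRunner {
    public static void main(String[] args) throws Exception {
        String sql = "show tables; select * from test_tb limit 10";

        ProcessBuilder pb = new ProcessBuilder(Arrays.asList("hive", "-e", sql));
        // merge stderr into stdout so the reading loop below cannot block
        // on an unread error stream
        pb.redirectErrorStream(true);

        Process hiveProcess = pb.start();
        List<String> results = new ArrayList<String>();
        BufferedReader br = new BufferedReader(
                new InputStreamReader(hiveProcess.getInputStream()));
        String line;
        while ((line = br.readLine()) != null) {
            results.add(line);
        }
        br.close();

        int exitCode = hiveProcess.waitFor();
        System.out.println("hive exited with code " + exitCode);
        for (String r : results) {
            System.out.println(r);
        }
    }
}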
『陆』 Hive JDBC connection fails with org.apache.thrift.transport.TTransportException: Invalid status -128
For this scenario either plain JDBC or a connection pool is sufficient. Since Spring is already managing things, a connection pool is recommended. Note that Spring does not implement a connection pool itself; it wraps a third-party pool, commonly C3P0, DBCP, or the more recently popular BoneCP. Their configurations are all very similar; taking BoneCP as an example:
<bean id="dataSource" class="com.jolbox.bonecp.BoneCPDataSource"
destroy-method="close">
<property name="driverClass" value="${jdbc.driverClass}" />
<property name="jdbcUrl" value="${jdbc.url}" />
<property name="username" value="${jdbc.user}" />
<property name="password" value="${jdbc.password}" />
<property name="idleConnectionTestPeriod" value="60" />
<property name="idleMaxAge" value="240" />
<property name="maxConnectionsPerPartition" value="30" />
<property name="minConnectionsPerPartition" value="10" />
<property name="partitionCount" value="2" />
<property name="acquireIncrement" value="5" />
<property name="statementsCacheSize" value="100" />
<property name="releaseHelperThreads" value="3" />
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>
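With that configuration in place, here is a sketch of using the jdbcTemplate bean against Hive. It assumes jdbc.driverClass points at a Hive JDBC driver, jdbc.url at the HiveServer endpoint, and that the beans live in a file named applicationContext.xml (a placeholder name).

import java.util.List;
import java.util.Map;

import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.jdbc.core.JdbcTemplate;

public class HiveJdbcTemplateDemo {
    public static void main(String[] args) {
        // load the Spring context that defines dataSource and jdbcTemplate
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");
        try {
            JdbcTemplate jdbc = ctx.getBean("jdbcTemplate", JdbcTemplate.class);

            // run a simple HiveQL query through the pooled connection
            List<Map<String, Object>> rows =
                    jdbc.queryForList("select * from test_tb limit 10");
            for (Map<String, Object> row : rows) {
                System.out.println(row);
            }
        } finally {
            ctx.close();
        }
    }
}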