『壹』 How do you create a HiveMetaStoreClient in Java?
Let's walk through how the system creates the meta client.
First, look at a few code snippets (taken from the Hive source):
public void createDatabase(Database db, boolean ifNotExist)
    throws AlreadyExistsException, HiveException {
  try {
    getMSC().createDatabase(db);
  } catch (AlreadyExistsException e) {
    if (!ifNotExist) {
      throw e;
    }
  } catch (Exception e) {
    throw new HiveException(e);
  }
}
=========
private IMetaStoreClient getMSC() throws MetaException {
  if (metaStoreClient == null) {
    metaStoreClient = createMetaStoreClient();
  }
  return metaStoreClient;
}
=========
private IMetaStoreClient createMetaStoreClient() throws MetaException {
  HiveMetaHookLoader hookLoader = new HiveMetaHookLoader() {
    public HiveMetaHook getHook(org.apache.hadoop.hive.metastore.api.Table tbl)
        throws MetaException {
      try {
        if (tbl == null) {
          return null;
        }
        HiveStorageHandler storageHandler = HiveUtils.getStorageHandler(conf,
            tbl.getParameters().get(META_TABLE_STORAGE));
        if (storageHandler == null) {
          return null;
        }
        return storageHandler.getMetaHook();
      } catch (HiveException ex) {
        LOG.error(StringUtils.stringifyException(ex));
        throw new MetaException("Failed to load storage handler: " + ex.getMessage());
      }
    }
  };
  return new HiveMetaStoreClient(conf, hookLoader);
}
=========
public HiveMetaStoreClient(HiveConf conf, HiveMetaHookLoader hookLoader)
    throws MetaException {
  this.hookLoader = hookLoader;
  if (conf == null) {
    conf = new HiveConf(HiveMetaStoreClient.class);
  }
  this.conf = conf;
  localMetaStore = conf.getBoolVar(ConfVars.METASTORE_MODE);
  if (localMetaStore) {
    // instantiate the metastore server handler directly instead of connecting
    // through the network
    client = new HiveMetaStore.HMSHandler("hive client", conf);
    isConnected = true;
    return;
  }
  // get the number of retries
  retries = HiveConf.getIntVar(conf, HiveConf.ConfVars.METASTORETHRIFTRETRIES);
  retryDelaySeconds = conf.getIntVar(ConfVars.METASTORE_CLIENT_CONNECT_RETRY_DELAY);
  // remote metastore: parse the comma-separated list of thrift URIs
  if (conf.getVar(HiveConf.ConfVars.METASTOREURIS) != null) {
    String metastoreUrisString[] = conf.getVar(
        HiveConf.ConfVars.METASTOREURIS).split(",");
    metastoreUris = new URI[metastoreUrisString.length];
    try {
      int i = 0;
      for (String s : metastoreUrisString) {
        URI tmpUri = new URI(s);
        if (tmpUri.getScheme() == null) {
          throw new IllegalArgumentException("URI: " + s
              + " does not have a scheme");
        }
        metastoreUris[i++] = tmpUri;
      }
    } catch (IllegalArgumentException e) {
      throw (e);
    } catch (Exception e) {
      MetaStoreUtils.logAndThrowMetaException(e);
    }
  } else if (conf.getVar(HiveConf.ConfVars.METASTOREDIRECTORY) != null) {
    metastoreUris = new URI[1];
    try {
      metastoreUris[0] = new URI(conf
          .getVar(HiveConf.ConfVars.METASTOREDIRECTORY));
    } catch (URISyntaxException e) {
      MetaStoreUtils.logAndThrowMetaException(e);
    }
  } else {
    LOG.error("NOT getting uris from conf");
    throw new MetaException("");
  }
  // finally open the store
  open();
}
Now let's look at this code more carefully, because it touches configuration parameters that matter when you deploy to production. Start with this snippet:
localMetaStore = conf.getBoolVar(ConfVars.METASTORE_MODE);
if (localMetaStore) {
  // instantiate the metastore server handler directly instead of connecting
  // through the network
  client = new HiveMetaStore.HMSHandler("hive client", conf);
  isConnected = true;
  return;
}
PS: ConfVars.METASTORE_MODE is defined as METASTORE_MODE("hive.metastore.local", true), i.e. the property hive.metastore.local, which defaults to true. With that default the client embeds an HMSHandler in-process; set hive.metastore.local=false and hive.metastore.uris to connect to a standalone metastore service over Thrift, as the remote branch of the constructor shows.
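To make the configuration concrete, here is a minimal sketch of how an application could obtain a metastore client against a remote metastore. It assumes the hive-metastore jars are on the classpath and uses the same ConfVars as the constructor above; the thrift URI is a placeholder for your own metastore host, and note that later Hive releases dropped hive.metastore.local and consult only hive.metastore.uris.
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;

public class MetaStoreClientDemo {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    // force the "remote" branch of the constructor shown above
    conf.setBoolVar(HiveConf.ConfVars.METASTORE_MODE, false);
    conf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://metastore-host:9083");

    // no meta hook loader is needed for simple read-only calls
    IMetaStoreClient client = new HiveMetaStoreClient(conf, null);
    try {
      System.out.println(client.getAllDatabases());
    } finally {
      client.close();
    }
  }
}
If hive.metastore.local is left at its default of true, the same constructor instead instantiates an in-process HMSHandler, exactly as the branch analyzed above shows.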
『貳』 How to execute Hive commands or HiveQL from Java
String sql="show tables; select * from test_tb limit 10";
List<String> command = new ArrayList<String>();
command.add("hive");
command.add("-e");
command.add(sql);
List<String> results = new ArrayList<String>();
ProcessBuilder hiveProcessBuilder = new ProcessBuilder(command);
Process hiveProcess = hiveProcessBuilder.start();
BufferedReader br = new BufferedReader(new InputStreamReader(
hiveProcess.getInputStream()));
String data = null;
while ((data = br.readLine()) != null) {
results.add(data);
}
『叄』 How to list the tables of all Hive databases from Java
try {
Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
String selectSql = "select * from db.data where address = '11111111'";
Connection connect = DriverManager.getConnection("jdbc:hive://192.168.xx.xx:10000/db", "xxx", "xxx");
PreparedStatement state = connect.prepareStatement(selectSql);
ResultSet resultSet = state.executeQuery();
while (resultSet != null && resultSet.next()) {
System.out.println(resultSet.getString(1) + " " + resultSet.getString(2));
}
} catch (Exception e) {
e.printStackTrace();
}
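The snippet above only runs an ordinary SELECT. To actually enumerate the databases and their tables, you can issue "show databases" and "show tables in ..." through the same JDBC driver. A minimal sketch, assuming the old HiveServer1 driver from the snippet above; the host, port, and credentials are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ListHiveTables {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    Connection conn = DriverManager.getConnection(
        "jdbc:hive://192.168.xx.xx:10000/default", "xxx", "xxx");
    Statement stmt = conn.createStatement();

    // first collect every database name
    List<String> dbs = new ArrayList<String>();
    ResultSet dbRs = stmt.executeQuery("show databases");
    while (dbRs.next()) {
      dbs.add(dbRs.getString(1));
    }
    dbRs.close();

    // then list the tables inside each database
    for (String db : dbs) {
      ResultSet tblRs = stmt.executeQuery("show tables in " + db);
      while (tblRs.next()) {
        System.out.println(db + "." + tblRs.getString(1));
      }
      tblRs.close();
    }
    stmt.close();
    conn.close();
  }
}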
『肆』 How to connect to a Hive database with Navicat for MySQL
Navicat cannot connect to Hive itself; it can only connect to Hive's metastore database, i.e. the metadata store (typically MySQL).
Feel free to ask if you have further questions.
『伍』 How to execute Hive commands or HiveQL from Java
Since Java 1.5, ProcessBuilder can launch a Process that runs a command or application in the runtime environment (before 1.5 you would use Runtime); see the Java documentation for the details of ProcessBuilder. The calling code is as follows:
String sql="show tables; select * from test_tb limit 10";
List<String> command = new ArrayList<String>();
command.add("hive");
command.add("-e");
command.add(sql);
List<String> results = new ArrayList<String>();
ProcessBuilder hiveProcessBuilder = new ProcessBuilder(command);
Process hiveProcess = hiveProcessBuilder.start();
BufferedReader br = new BufferedReader(new InputStreamReader(
hiveProcess.getInputStream()));
String data = null;
while ((data = br.readLine()) != null) {
results.add(data);
}
Here, command can be any other Hive command; it does not have to be HiveQL.
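For reference, a self-contained sketch of the same approach; the class and method names are illustrative and not from the original answer, and it assumes the hive CLI is on the PATH. Merging stderr into stdout keeps the child process from blocking on a full pipe buffer, and checking the exit code surfaces failed statements:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class HiveCliRunner {
  public static List<String> runHiveQl(String sql) throws Exception {
    List<String> command = new ArrayList<String>();
    command.add("hive");
    command.add("-e");
    command.add(sql);

    ProcessBuilder builder = new ProcessBuilder(command);
    builder.redirectErrorStream(true);      // merge stderr into stdout so nothing blocks
    Process hiveProcess = builder.start();

    List<String> results = new ArrayList<String>();
    BufferedReader br = new BufferedReader(
        new InputStreamReader(hiveProcess.getInputStream()));
    String line;
    while ((line = br.readLine()) != null) {
      results.add(line);
    }
    br.close();

    int exitCode = hiveProcess.waitFor();   // 0 means the CLI finished successfully
    if (exitCode != 0) {
      throw new RuntimeException("hive -e exited with code " + exitCode);
    }
    return results;
  }

  public static void main(String[] args) throws Exception {
    for (String row : runHiveQl("show tables")) {
      System.out.println(row);
    }
  }
}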
『陸』 Hive JDBC connection fails with org.apache.thrift.transport.TTransportException: Invalid status -128
For this scenario, either plain JDBC or a connection pool is sufficient; since you already manage things with Spring, a connection pool is recommended. Note that Spring does not implement a pool itself but wraps third-party pools such as C3P0, DBCP, or the more recently popular BoneCP, whose configurations are all fairly similar. Taking BoneCP as an example:
<bean id="dataSource" class="com.jolbox.bonecp.BoneCPDataSource"
destroy-method="close">
<property name="driverClass" value="${jdbc.driverClass}" />
<property name="jdbcUrl" value="${jdbc.url}" />
<property name="username" value="${jdbc.user}" />
<property name="password" value="${jdbc.password}" />
<property name="idleConnectionTestPeriod" value="60" />
<property name="idleMaxAge" value="240" />
<property name="maxConnectionsPerPartition" value="30" />
<property name="minConnectionsPerPartition" value="10" />
<property name="partitionCount" value="2" />
<property name="acquireIncrement" value="5" />
<property name="statementsCacheSize" value="100" />
<property name="releaseHelperThreads" value="3" />
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>
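With the pool and JdbcTemplate wired up, application code simply has the template injected and issues HiveQL through it. A brief sketch under these assumptions: the DAO class and table name below are placeholders, and the jdbc.driverClass/jdbc.url properties in the pool configuration would point at the Hive JDBC driver and URL used elsewhere in this article:
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;

public class HiveQueryDao {
  private JdbcTemplate jdbcTemplate;   // injected by Spring via the jdbcTemplate bean above

  public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
    this.jdbcTemplate = jdbcTemplate;
  }

  // runs a simple HiveQL query through a pooled connection
  public List<Map<String, Object>> firstRows() {
    return jdbcTemplate.queryForList("select * from test_tb limit 10");
  }
}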