(Repost) hadoop.job.ugi no longer takes effect as of Cloudera CDH3B3

Source: /wf1982/article/details/673

As of Cloudera CDH3B3, hadoop.job.ugi no longer takes effect!

This had puzzled me for several days, and I finally found the cause. My company previously used stock hadoop-0.20.2, where setting hadoop.job.ugi from Java to the right Hadoop user and group was enough to access HDFS normally, including creating and deleting files.
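
For reference, the old pattern looked roughly like the sketch below. This is only an illustration, not code from the original article: the NameNode address, the path, and the "hadoop,supergroup" value are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OldUgiExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Example NameNode address; adjust to your cluster.
            conf.set("fs.default.name", "hdfs://namenode:8020");
            // Pre-CDH3B3 trick: claim a user and group via hadoop.job.ugi
            // (comma-separated). "hadoop,supergroup" is only an example value.
            conf.set("hadoop.job.ugi", "hadoop,supergroup");

            FileSystem fs = FileSystem.get(conf);
            fs.mkdirs(new Path("/tmp/ugi-demo"));        // create
            fs.delete(new Path("/tmp/ugi-demo"), true);  // delete
            // From CDH3B3 on, the hadoop.job.ugi setting above is silently ignored.
        }
    }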

After upgrading to CDH3B4 the same trick no longer worked. I searched through a lot of material without finding the cause, until I finally came across the following:

The hadoop.job.ugi configuration no longer has any effect. Instead, please use the UserGroupInformation.doAs API to impersonate other users on a non-secured cluster. (As of CDH3b3)

In other words: the hadoop.job.ugi setting is now simply ignored; to act as another user on a non-secured cluster you have to go through the UserGroupInformation.doAs API instead.

Incompatible changes from earlier versions:

- The TaskTracker configuration parameter mapreduce.tasktracker.local.cache.numberdirectories has been renamed to mapreduce.tasktracker.cache.local.numberdirectories. (As of CDH3u0)
- The job-level configuration parameters mapred.max.maps.per.node, mapred.max.reduces.per.node, mapred.running.map.limit, and mapred.running.reduce.limit have been removed. (As of CDH3b4)
- CDH3 no longer contains packages for Debian Lenny, Ubuntu Hardy, Jaunty, or Karmic. Check out these upgrade instructions if you are using an Ubuntu release past its end of life. If you are using a release for which Cloudera's Debian or RPM packages are not available, you can always use the tarballs from the CDH download page. (As of CDH3b4)
- The hadoop.job.ugi configuration no longer has any effect. Instead, please use the UserGroupInformation.doAs API to impersonate other users on a non-secured cluster. (As of CDH3b3)
- The UnixUserGroupInformation class has been removed. Please see the new methods in the UserGroupInformation class (a short sketch of the replacement API follows this list). (As of CDH3b3)
- The resolution of groups for a user is now performed on the server side. For a user's group membership to take effect, it must be visible on the NameNode and JobTracker machines. (As of CDH3b3)
- The mapred.tasktracker.procfsbasedprocesstree.sleeptime-before-sigkill configuration has been renamed to mapred.tasktracker.tasks.sleeptime-before-sigkill. (As of CDH3b3)
- The HDFS and MapReduce daemons no longer run as a single shared hadoop user. Instead, the HDFS daemons run as hdfs and the MapReduce daemons run as mapred. See Changes in User Accounts and Groups in CDH3. (As of CDH3b3)
- Due to a change in the internal compression APIs, CDH3 is incompatible with versions of the hadoop-lzo open source project prior to 0.4.9. (As of CDH3b3)
- CDH3 changes the wire format for Hadoop's RPC mechanism. Thus, you must upgrade any existing client software at the same time as the cluster is upgraded. (All versions)
- Zero values for the dfs.socket.timeout and dfs.datanode.socket.write.timeout configuration parameters are now respected. Previously, zero values for these parameters resulted in a 5 second timeout. (As of CDH3u1)
- When Hadoop's Kerberos integration is enabled, it is now required that either kinit be on the path for user accounts running the Hadoop client, or that the hadoop.kerberos.kinit.command configuration option be manually set to the absolute path to kinit. (As of CDH3u1)
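
As a small illustration of the UnixUserGroupInformation removal noted above (not from the original article), the replacement lookups now live on UserGroupInformation itself; the user name "joe" below is just an example:

    import org.apache.hadoop.security.UserGroupInformation;

    public class UgiLookupExample {
        public static void main(String[] args) throws Exception {
            // Who the client process is actually running as (replaces the old
            // UnixUserGroupInformation-style lookups).
            UserGroupInformation current = UserGroupInformation.getCurrentUser();
            System.out.println("current user: " + current.getUserName());

            // Build a UGI for an arbitrary user name without credentials
            // ("joe" is just an example). Note that groups are now resolved
            // server-side, so the NameNode/JobTracker must be able to see
            // joe's group membership.
            UserGroupInformation remote = UserGroupInformation.createRemoteUser("joe");
            System.out.println("remote user: " + remote.getUserName());
        }
    }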

Hive

The upgrade of Hive from CDH2 to CDH3 requires several manual steps. Please be sure to follow the upgrade guide closely. See Upgrading Hive and Hue in CDH3. Address: /display/CDHDOC/Incompatible+Changes

Back to the original question: how do you use UserGroupInformation.doAs?

Suppose oozie wants to access HDFS, but only joe is allowed to access HDFS normally. In that case oozie needs to impersonate joe.

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.security.UserGroupInformation;

    // ...

    UserGroupInformation ugi =
        UserGroupInformation.createProxyUser(user, UserGroupInformation.getLoginUser());
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
        public Void run() throws Exception {
            // Submit a job as the proxied user
            JobClient jc = new JobClient(conf);
            jc.submitJob(conf);
            // OR access HDFS as the proxied user
            FileSystem fs = FileSystem.get(conf);
            fs.mkdirs(someFilePath);
            return null;
        }
    });

The NameNode and JobTracker need the following configuration:

    <property>
      <name>hadoop.proxyuser.oozie.groups</name>
      <value>group1,group2</value>
      <description>Allow the superuser oozie to impersonate any members of the group group1 and group2</description>
    </property>
    <property>
      <name>hadoop.proxyuser.oozie.hosts</name>
      <value>host1,host2</value>
      <description>The superuser can connect only from host1 and host2 to impersonate a user</description>
    </property>

Without this configuration, the impersonation will not succeed.
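
Note that the NameNode and JobTracker have to pick up these core-site.xml changes before impersonation works. Depending on the Hadoop version, that means restarting the daemons, or refreshing the proxy-user settings with hadoop dfsadmin -refreshSuperUserGroupsConfiguration and hadoop mradmin -refreshSuperUserGroupsConfiguration where those commands are available.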

Caveats

The superuser must have Kerberos credentials to be able to impersonate another user. It cannot use delegation tokens for this feature. It would be wrong if the superuser added its own delegation token to the proxy user's ugi, as that would allow the proxy user to connect to the service with the privileges of the superuser.

However, if the superuser does want to give joe a delegation token, it must first impersonate joe and get a delegation token for joe, in the same way as the code example above, and add it to joe's ugi. That way the delegation token will have joe as its owner.
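
A rough sketch of that caveat (not taken from the original article): the superuser impersonates joe, fetches the token while running as joe, and then attaches it to joe's ugi. FileSystem.getDelegationToken(String renewer) is assumed here; on older 0.20-era releases the equivalent call lives on DistributedFileSystem, and the renewer name "oozie" is only an example.

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.security.token.Token;

    public class DelegationTokenForJoe {
        public static void main(String[] args) throws Exception {
            final Configuration conf = new Configuration();
            // Impersonate joe first, then fetch the delegation token as joe.
            UserGroupInformation joe =
                UserGroupInformation.createProxyUser("joe", UserGroupInformation.getLoginUser());
            Token<?> token = joe.doAs(new PrivilegedExceptionAction<Token<?>>() {
                public Token<?> run() throws Exception {
                    FileSystem fs = FileSystem.get(conf);
                    // Assumed API; "oozie" is an example renewer name.
                    return fs.getDelegationToken("oozie");
                }
            });
            // Attach the token to joe's ugi, so joe is its owner.
            joe.addToken(token);
        }
    }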

For a detailed explanation of Secure Impersonation using UserGroupInformation.doAs, see

/common/docs/stable/Secure_Impersonation.html

According to the above, for Java code to access Hadoop and operate on it normally, you need to implement Kerberos authentication, set up the corresponding configuration, and use the UserGroupInformation.doAs approach.

If you don't do this, does the application have to run under the hadoop user to operate normally?!
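
On a secured cluster that conclusion translates into roughly the following sketch: log in from a keytab as the proxying superuser, then wrap the HDFS calls in doAs. The principal, keytab path, user name, and HDFS path below are placeholders, not values from the original article.

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class SecureDoAsExample {
        public static void main(String[] args) throws Exception {
            final Configuration conf = new Configuration();
            // Enable Kerberos authentication for this client.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Placeholder principal and keytab; adjust to your environment.
            UserGroupInformation.loginUserFromKeytab("oozie/host@EXAMPLE.COM",
                                                     "/etc/security/keytabs/oozie.keytab");

            // Impersonate joe; the superuser must be allowed by the
            // hadoop.proxyuser.oozie.* properties shown earlier.
            UserGroupInformation joe =
                UserGroupInformation.createProxyUser("joe", UserGroupInformation.getLoginUser());
            joe.doAs(new PrivilegedExceptionAction<Void>() {
                public Void run() throws Exception {
                    FileSystem fs = FileSystem.get(conf);
                    fs.mkdirs(new Path("/user/joe/demo")); // executes as joe
                    return null;
                }
            });
        }
    }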
