
I have Ubuntu installed. I have already set all of these environment variables, but I still get an error.



export JAVA_HOME=/home/imran/.sdkman/candidates/java/current 
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"

Error:

imran@Imran:~/Downloads/Compressed/hadoop-3.3.5/sbin$ ./start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as imran in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
ERROR: namenode can only be executed by root.
Starting datanodes
ERROR: datanode can only be executed by root.
Starting secondary namenodes [Imran]
ERROR: secondarynamenode can only be executed by root.
Starting resourcemanager
ERROR: resourcemanager can only be executed by root.
Starting nodemanagers
ERROR: nodemanager can only be executed by root.
imran@Imran:~/Downloads/Compressed/hadoop-3.3.5/sbin$ sudo ./start-all.sh
[sudo] password for imran: 
Starting namenodes on [localhost]
localhost: root@localhost: Permission denied (publickey,password).
Starting datanodes
localhost: root@localhost: Permission denied (publickey,password).
Starting secondary namenodes [Imran]
Imran: root@imran: Permission denied (publickey,password).
Starting resourcemanager
Starting nodemanagers
localhost: root@localhost: Permission denied (publickey,password).

I tried running these commands, but I get a permission denied error.

2 Answers


  1. The start-all.sh script starts all Hadoop daemons (YARN, HDFS, history server, etc.) on all nodes of your cluster. It reads your master and workers files to get the IPs of the master and worker nodes. It first tries to launch the namenode on localhost, since your master file probably contains localhost. Because you have exported HDFS_NAMENODE_USER=root, it tries to start the namenode as root, but your Linux user is imran, hence the error. Either change the environment variables to imran (see the sketch below) or run start-all.sh as the root user. You can also look at auth_to_local rules; the official Hadoop documentation has more information on them.
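
    A minimal sketch of the first option, assuming you want the daemons to run as your Linux user imran rather than root (put these in etc/hadoop/hadoop-env.sh or your shell profile, replacing the root values you exported earlier):

    export HDFS_NAMENODE_USER=imran
    export HDFS_DATANODE_USER=imran
    export HDFS_SECONDARYNAMENODE_USER=imran
    export YARN_RESOURCEMANAGER_USER=imran
    export YARN_NODEMANAGER_USER=imran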

    For remote execution, the script will SSH to each remote node and run the commands to start the Hadoop daemons there, so SSH will ask for a password. You can set up authorized keys to remove the interactive password prompt (a sketch follows below). Because you have set the datanode environment variable to root, it will try to SSH as root; root SSH login is probably not permitted on your machine, which is why it gives Permission denied.
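
    A minimal sketch of passwordless SSH to localhost for the imran user, assuming openssh-server is installed and default key locations:

    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa           # key with an empty passphrase
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # authorize the key locally
    chmod 0600 ~/.ssh/authorized_keys
    ssh localhost                                      # should now log in without a password prompt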

  2. The Permission denied errors come from SSH, not from Unix passwords.

    Read your first two warnings:

    WARNING: Attempting to start all Apache Hadoop daemons as imran in 10 seconds.
    WARNING: This is not a recommended production deployment configuration.
    

    Notice that it is not using the root value you've set, yet you still see "can only be executed by root".

    Then read the Hadoop documentation, which states that:

    1) You should create a unique Unix user account for each service and never run processes as root, especially since YARN can run untrusted code.
    2) You must create a password-less SSH key and distribute it amongst all nodes before starting Hadoop. You can use the ssh-copy-id command for this (see the sketch below).
    3) Hadoop is insecure by default and doesn't use passwords. For securing a cluster, you would use Kerberos.
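
    A minimal sketch of point 2 using ssh-copy-id, assuming a single-node setup where imran SSHes to its own machine (for a real cluster you would repeat the copy step for every node's hostname):

    ssh-keygen -t ed25519            # create a key pair if you don't already have one
    ssh-copy-id imran@localhost      # appends the public key to ~/.ssh/authorized_keys on the target
    ssh imran@localhost              # verify: no password prompt expected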

    Also, start-all.sh is not a script to use in production. Instead, you'd start each daemon individually, for example with hdfs --daemon start namenode, ideally managed via systemctl rather than called directly (see the sketch below).
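
    For example, a sketch of starting each daemon individually (the --daemon form is standard in Hadoop 3.x; the systemd unit is a hypothetical example with assumed paths and user, not something Hadoop ships):

    hdfs --daemon start namenode
    hdfs --daemon start datanode
    hdfs --daemon start secondarynamenode
    yarn --daemon start resourcemanager
    yarn --daemon start nodemanager

    # hypothetical /etc/systemd/system/hadoop-namenode.service
    [Unit]
    Description=Hadoop HDFS NameNode
    After=network.target

    [Service]
    User=hadoop
    # run in the foreground; systemd takes care of daemonizing and restarts
    ExecStart=/opt/hadoop/bin/hdfs namenode
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target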
