How to resolve proxy error in reading & writing to HDFS using Python?

I have an HDFS cluster that I want to read from and write to using a Python script.

import requests
import json
import os
import kerberos
import sys

node = os.getenv("namenode").split(",")
print (node)

local_file_path = sys.argv[1]
remote_file_path = sys.argv[2]
read_or_write = sys.argv[3]
print (local_file_path,remote_file_path)

def check_node_status(node):
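    # Poll each NameNode's JMX status servlet and return the state, base URL and hostname of the active NameNode.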
    for name in node:
        print (name)
        request = requests.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"%name,
                               verify=False).json()
        status = request["beans"][0]["State"]
        if status =="active":
            nnhost = request["beans"][0]["HostAndPort"]
            splitaddr = nnhost.split(":")
            nnaddress = splitaddr[0]
            print(nnaddress)
            break
    return status,name,nnaddress

def kerberos_auth(nnaddress):
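    # Build a SPNEGO "Negotiate" Authorization header from the local Kerberos credentials (obtained beforehand via kinit).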
    __, krb_context = kerberos.authGSSClientInit("HTTP@%s"%nnaddress)
    kerberos.authGSSClientStep(krb_context, "")
    negotiate_details = kerberos.authGSSClientResponse(krb_context)
    headers = {"Authorization": "Negotiate " + negotiate_details,
                "Content-Type":"application/binary"}
    return headers

def kerberos_hdfs_upload(status,name,headers):
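    # Upload local_file_path to remote_file_path via the WebHDFS CREATE operation.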
    print("running upload function")
    if status =="active":
        print("if function")
        data=open('%s'%local_file_path, 'rb').read()
        write_req = requests.put("%s/webhdfs/v1%s?op=CREATE&overwrite=true"%(name,remote_file_path),
                                 headers=headers,
                                 verify=False, 
                                 allow_redirects=True,
                                 data=data)
        print(write_req.text)

def kerberos_hdfs_read(status,name,headers):
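    # Download remote_file_path to local_file_path via the WebHDFS OPEN operation.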
    if status == "active":
        read = requests.get("%s/webhdfs/v1%s?op=OPEN"%(name,remote_file_path),
                            headers=headers,
                            verify=False,
                            allow_redirects=True)

        if read.status_code == 200:
            data=open('%s'%local_file_path, 'wb')
            data.write(read.content)
            data.close()
        else : 
            print(read.content)


status, name, nnaddress= check_node_status(node)
headers = kerberos_auth(nnaddress)
if read_or_write == "write":
    kerberos_hdfs_upload(status,name,headers)
elif read_or_write == "read":
    print("fun")
    kerberos_hdfs_read(status,name,headers)

The code runs fine on my own machine, which is not behind any proxy. But when I run it on an office machine that sits behind a proxy server, it gives the following proxy error:

$ python3 python_hdfs.py ./1.png /user/testuser/2018-02-07_1.png write
['https://<servername>:50470', 'https:// <servername>:50470']
./1.png /user/testuser/2018-02-07_1.png
https://<servername>:50470
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 555, in urlopen
    self._prepare_proxy(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 753, in _prepare_proxy
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 230, in connect
    self._tunnel()
  File "/usr/lib/python3.5/http/client.py", line 832, in _tunnel
    message.strip()))
OSError: Tunnel connection failed: 504 Unknown Host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 273, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='<servername>', port=50470): Max retries exceeded with url: /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Unknown Host',)))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "python_hdfs.py", line 68, in <module>
    status, name, nnaddress= check_node_status(node)
  File "python_hdfs.py", line 23, in check_node_status
    verify=False).json()
  File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='<server_name>', port=50470): Max retries exceeded with url: /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Unknown Host',)))

I tried providing the proxy information in the code, as shown below:

proxies = {
"http": "<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
"https": "<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
}

node = os.getenv("namenode").split(",")
print (node)

local_file_path = sys.argv[1]
remote_file_path = sys.argv[2]
read_or_write = sys.argv[3]
print (local_file_path,remote_file_path)

def check_node_status(node):
    for name in node:
        print (name)
        request = requests.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"%name,
                               proxies=proxies,
                               verify=False).json()
        status = request["beans"][0]["State"]
        if status =="active":
            nnhost = request["beans"][0]["HostAndPort"]
            splitaddr = nnhost.split(":")
            nnaddress = splitaddr[0]
            print(nnaddress)
            break
    return status,name,nnaddress
### Rest of the code is the same

Now I get the following error:

$ python3 python_hdfs.py ./1.png /user/testuser/2018-02-07_1.png write
['https://<servername>:50470', 'https:// <servername>:50470']
./1.png /user/testuser/2018-02-07_1.png
https://<servername>:50470
Traceback (most recent call last):
  File "python_hdfs.py", line 73, in <module>
    status, name, nnaddress= check_node_status(node)
  File "python_hdfs.py", line 28, in check_node_status
    verify=False).json()
  File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 343, in send
    conn = self.get_connection(request.url, proxies)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 254, in get_connection
    proxy_manager = self.proxy_manager_for(proxy)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 160, in proxy_manager_for
    **proxy_kwargs)
  File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 281, in proxy_from_url
    return ProxyManager(proxy_url=url, **kw)
  File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 232, in __init__
    raise ProxySchemeUnknown(proxy.scheme)
requests.packages.urllib3.exceptions.ProxySchemeUnknown: Not supported proxy scheme <proxy_username>
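
The ProxySchemeUnknown error is raised because the proxy URLs in the dictionary above have no scheme, so urllib3 ends up treating the username as the scheme. As a minimal sketch (placeholder credentials only, and this alone would not fix the original 504 tunnel error), a scheme-qualified proxy dictionary looks like this:

proxies = {
    # The proxy itself is usually reached over plain HTTP, even for https:// targets.
    "http":  "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
    "https": "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
}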

So, my question is: do I need to set up the proxy somewhere in Kerberos for this to work? If so, how? I am not very familiar with Kerberos. I run kinit before running the Python code in order to get into the Kerberos realm, and that runs fine and connects to the appropriate HDFS server without a proxy. So I don't know why this error appears when reading from or writing to the same HDFS server. Any help is appreciated.
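
Since kinit and the cluster itself are reachable without a proxy, one thing worth trying (my suggestion, not one of the attempts described here) is to keep the WebHDFS requests off the proxy altogether. requests can be told to ignore the machine-wide proxy environment by using a Session with trust_env disabled, roughly like this:

import requests

namenode = "https://<servername>:50470"   # placeholder, as in the question

session = requests.Session()
session.trust_env = False   # ignore http_proxy/https_proxy from the environment
session.verify = False      # matches the verify=False used in the script above

status = session.get(
    "%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" % namenode
).json()
print(status["beans"][0]["State"])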

I have also set the proxy in /etc/apt/apt.conf, as shown below:

Acquire::http::proxy  "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>/";
Acquire::https::proxy "https://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>/";

I also tried the following:

$ export http_proxy="http://<user>:<pass>@<proxy>:<port>"
$ export HTTP_PROXY="http://<user>:<pass>@<proxy>:<port>"

$ export https_proxy="http://<user>:<pass>@<proxy>:<port>"
$ export HTTPS_PROXY="http://<user>:<pass>@<proxy>:<port>"

import os

proxy = 'http://<user>:<pass>@<proxy>:<port>'

os.environ['http_proxy'] = proxy 
os.environ['HTTP_PROXY'] = proxy
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy

#rest of the code is same

But the error still persists.
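
A related environment setting not shown above (again, only a suggestion on my part): requests also honours no_proxy/NO_PROXY, which excludes specific hosts from proxying even when http_proxy/https_proxy are set, e.g.:

$ export no_proxy="<servername>"
$ export NO_PROXY="<servername>"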

Update: I also tried the following.

  1. It was suggested that the proxies in /etc/apt/apt.conf were set so that we can connect to the web, but that we may not need a proxy at all to connect to HDFS. So I tried commenting out the proxies in /etc/apt/apt.conf and running the script again, which I did:

    $ env | grep proxy
    http_proxy=http://hfli:Test6969@192.168.44.217:8080
    https_proxy=https://hfli:Test6969@192.168.44.217:8080
    $ unset http_proxy
    $ unset https_proxy
    $ env | grep proxy
    $

Then I ran the Python script (i) without the proxies defined in the Python script, and (ii) with the proxies defined in the Python script. I got the same original proxy error in both cases.

  2. I found the following Java program that is able to access HDFS:

    import com.sun.security.auth.callback.TextCallbackHandler;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import javax.security.auth.Subject;
    import javax.security.auth.login.LoginContext;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class HDFS_RW_Secure {
    public static void main(String[] args) throws Exception {
        System.setProperty("java.security.auth.login.config", "/tmp/sc3_temp/hadoop_kdc.txt");
        System.setProperty("java.security.krb5.conf", "/tmp/sc3_temp/hadoop_krb.txt");

        Configuration hadoopConf = new Configuration();
        // This example logs in with a password; it can be changed to use a keytab instead.
        LoginContext lc;
        Subject subject;
        lc = new LoginContext("JaasSample", new TextCallbackHandler());
        lc.login();
        System.out.println("login");

        subject = lc.getSubject();
        UserGroupInformation.setConfiguration(hadoopConf);
        UserGroupInformation ugi = UserGroupInformation.getUGIFromSubject(subject);
        UserGroupInformation.setLoginUser(ugi); 
    
        Path pt=new Path("hdfs://edhcluster"+args[0]);
    
        FileSystem fs = FileSystem.get(hadoopConf);
    
        //write
        FSDataOutputStream fin = fs.create(pt);
        fin.writeUTF("Hello!");
        fin.close();
    
        BufferedReader br=new BufferedReader(new InputStreamReader(fs.open(pt)));
        String line;
        line=br.readLine();
        while (line != null)
        {
               System.out.println(line);
               line=br.readLine();
        }
        fs.close();
        System.out.println("This is the end.");
    

    } }

We need to build its jar file, HDFS.jar, and run the shell script below so that the Java program can run against HDFS.

nano run.sh
# contents of the run.sh file:
/tmp/sc3_temp/jre1.8.0_161/bin/java -Djavax.net.ssl.trustStore=/tmp/sc3_temp/cacerts -Djavax.net.ssl.trustStorePassword=changeit -jar /tmp/sc3_temp/HDFS.jar 

So I can run this shell script, passing /user/testuser as the argument, to have the Java program access HDFS:

./run.sh /user/testuser/test2

This gives the following output:

Debug is  true storeKey false useTicketCache false useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Kerberos username [testuser]: testuser
Kerberos password for testuser: 
        [Krb5LoginModule] user entered username: testuser

principal is testuser@KRB.REALM
Commit Succeeded 

login
2018-02-08 14:09:30,020 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Hello!
This is the end.

So I guess this approach works. But how do I write an equivalent shell script to run the Python code instead?
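
For reference, a wrapper in the same spirit as run.sh could be as simple as obtaining a ticket and then calling the script; the keytab path and principal below are placeholders I made up:

#!/bin/bash
# Hypothetical wrapper: get a Kerberos ticket, then run the Python script.
kinit -kt /tmp/sc3_temp/testuser.keytab testuser@KRB.REALM
python3 python_hdfs.py ./1.png /user/testuser/2018-02-07_1.png write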

I found the solution. It turns out I was looking in the wrong place: the user account had been set up incorrectly. I tried something simpler, like downloading a web page onto the server, and noticed that it fetched the page but did not have the right permissions on it. Exploring further, I found that when the user account was created it had not been given the proper ownership. Once I assigned the correct owner to the user account, the proxy error went away. (Sigh, so much wasted time.)

I have written this up in more detail.