Accessing the HDFS web UI from a browser fails with the following error

    xiaoxiao · 2024-02-19

    HTTP ERROR 500

    Problem accessing /nn_browsedfscontent.jsp. Reason:

        Can't browse the DFS since there are no live nodes available to redirect to.

    Caused by:

    java.io.IOException: Can't browse the DFS since there are no live nodes available to redirect to.
        at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:646)
        at org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
        at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
        at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1081)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

    Likely cause:

    The DataNode service failed to start.

    Troubleshooting:

    Running jps shows that there is no DataNode process.
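    The jps check can be scripted, for example when inspecting many nodes. A minimal Python sketch that scans jps output for a DataNode entry; the sample output below (PIDs and daemon list) is illustrative, not taken from the cluster above:

```python
def live_daemons(jps_output: str) -> set:
    """Parse jps output lines like '12345 DataNode' into a set of process names."""
    daemons = set()
    for line in jps_output.splitlines():
        parts = line.split()
        if len(parts) == 2:
            daemons.add(parts[1])
    return daemons

# Illustrative jps output from a node where the DataNode failed to start
# (the PIDs and the daemon list are hypothetical):
sample = """20131 NameNode
20352 SecondaryNameNode
20514 Jps"""

print("DataNode" in live_daemons(sample))  # -> False
```

On a real node the same check would be fed the output of the jps command itself.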

    The DataNode log shows:

    java.io.IOException: Incompatible clusterIDs in /home/hdfs/data: namenode clusterID = CID-4923bb76-3ceb-424d-a794-85e608f18307; datanode clusterID = CID-9487a0bb-c768-4673-a654-73dee9e1028e
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
        at java.lang.Thread.run(Thread.java:745)

    Analysis:

    This error means that the clusterID recorded by the NameNode does not match the clusterID recorded by the DataNode under /home/hdfs/data. This typically happens when the NameNode is reformatted (which generates a new clusterID) while the DataNode keeps its old storage directory.
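    Each storage directory records its clusterID in a current/VERSION file, so the mismatch can be confirmed directly. A hedged sketch that extracts and compares the two IDs; the VERSION contents below are minimal stand-ins (only the clusterID lines come from the log above):

```python
def cluster_id(version_text: str) -> str:
    """Return the clusterID value from the contents of a Hadoop VERSION file."""
    for line in version_text.splitlines():
        if line.startswith("clusterID="):
            return line.split("=", 1)[1].strip()
    raise ValueError("no clusterID line found")

# Minimal stand-ins for name/current/VERSION and data/current/VERSION;
# other properties of a real VERSION file are omitted here.
namenode_version = "clusterID=CID-4923bb76-3ceb-424d-a794-85e608f18307\nstorageType=NAME_NODE"
datanode_version = "clusterID=CID-9487a0bb-c768-4673-a654-73dee9e1028e\nstorageType=DATA_NODE"

print(cluster_id(namenode_version) == cluster_id(datanode_version))  # -> False
```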

    Solutions:

    1. Delete everything under the Hadoop temporary directory and reformat the NameNode. (Not recommended: HDFS may still hold existing data, and this operation destroys all of it.)

    2. Edit the VERSION file under the current folder of the data and name directories in the Hadoop temporary directory so that the clusterID values match, then restart the DataNode.
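    Option 2 can be scripted: read the clusterID from the NameNode's VERSION file and write it into the DataNode's. The sketch below uses temporary files as stand-ins for the real name/current/VERSION and data/current/VERSION paths, so the file contents here are illustrative:

```python
import os
import re
import tempfile

def copy_cluster_id(namenode_version: str, datanode_version: str) -> None:
    """Overwrite the clusterID in the DataNode VERSION file with the NameNode's."""
    with open(namenode_version) as f:
        nn_id = re.search(r"^clusterID=(.+)$", f.read(), re.M).group(1)
    with open(datanode_version) as f:
        text = f.read()
    text = re.sub(r"^clusterID=.*$", "clusterID=" + nn_id, text, flags=re.M)
    with open(datanode_version, "w") as f:
        f.write(text)

# Demo with temporary stand-ins for the real VERSION files:
tmp = tempfile.mkdtemp()
nn_path = os.path.join(tmp, "nn_VERSION")
dn_path = os.path.join(tmp, "dn_VERSION")
with open(nn_path, "w") as f:
    f.write("clusterID=CID-4923bb76-3ceb-424d-a794-85e608f18307\nstorageType=NAME_NODE\n")
with open(dn_path, "w") as f:
    f.write("clusterID=CID-9487a0bb-c768-4673-a654-73dee9e1028e\nstorageType=DATA_NODE\n")

copy_cluster_id(nn_path, dn_path)
with open(dn_path) as f:
    print(f.read().splitlines()[0])  # clusterID now matches the NameNode's
```

On a real cluster, stop the DataNode before editing its VERSION file and start it again afterwards.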
