Cygnus not starting as a service
I have been looking at other people's questions about the cygnus configuration files, but I still can't get it to work.
Starting cygnus with "service cygnus start" fails.
When I try to start the service, the log at /var/log/cygnus/cygnus.log shows:
Warning: JAVA_HOME is not set!
+ exec /usr/bin/java -Xmx20m -Dflume.log.file=cygnus.log -cp '/usr/cygnus/conf:/usr/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/libext/*' -Djava.library.path= com.telefonica.iot.cygnus.nodes.CygnusApplication -p 8081 -f /usr/cygnus/conf/agent_1.conf -n cygnusagent
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/cygnus/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/cygnus/plugins.d/cygnus/lib/cygnus-0.8.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: ./logs/cygnus.log (No such file or directory)
at java.io.FileOutputStream.openAppend(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:210)
at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:547)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
at org.apache.flume.node.Application.<clinit>(Application.java:58)
Starting an ordered shutdown of Cygnus
Stopping sources
All the channels are empty
Stopping channels
Stopping hdfs-channel (lyfecycle state=START)
Stopping sinks
Stopping hdfs-sink (lyfecycle state=START)
JAVA_HOME is set, so I think the problem is in the configuration files:
agent_1.conf:
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink
cygnusagent.channels = hdfs-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = hdfs-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionHDFSSink configuration
# channel name from where to read notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
# sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = com.telefonica.iot.cygnus.sinks.OrionHDFSSink
# Comma-separated list of FQDN/IP address regarding the HDFS Namenode endpoints
# If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory
cygnusagent.sinks.hdfs-sink.hdfs_host = cosmos.lab.fiware.org
# port of the HDFS service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs
cygnusagent.sinks.hdfs-sink.hdfs_port = 14000
# username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.hdfs_username = MYUSERNAME
# OAuth2 token
cygnusagent.sinks.hdfs-sink.oauth2_token = MYTOKEN
# how the attributes are stored, either per row either per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = column
# Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fiware.org
# Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000
# Kerberos-based authentication enabling
cygnusagent.sinks.hdfs-sink.krb5_auth = false
# Kerberos username
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
# Kerberos password
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
# Kerberos login file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
# Kerberos configuration file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
#=============================================
# hdfs-channel configuration
# channel type (must not be changed)
cygnusagent.channels.hdfs-channel.type = memory
# capacity of the channel
cygnusagent.channels.hdfs-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.hdfs-channel.transactionCapacity = 100
and cygnus_instance_1.conf:
CYGNUS_USER=cygnus
CONFIG_FOLDER=/usr/cygnus/conf
CONFIG_FILE=/usr/cygnus/conf/agent_1.conf
# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME=cygnusagent
# Name of the logfile located at /var/log/cygnus.
LOGFILE_NAME=cygnus.log
# Administration port. Must be unique per instance
ADMIN_PORT=8081
# Polling interval (seconds) for the configuration reloading
POLLING_INTERVAL=30
I hope this is something simple. Let me know if you need any more information.
By the way, I got the token following the instructions at this link.
Shouldn't there be a password field for accessing the COSMOS global instance, or is the token enough?
Thanks
Although I haven't worked on cygnus for quite a while, the problem you describe looks like the application cannot find the log directory when Cygnus starts, so the service fails to start. This is a configuration issue.
To avoid this, here are a few steps that may help.
Start the cygnus service from cygnus's home directory, so that the log directory is reachable as well.
For example, if your cygnus home directory is "/usr/local/cygnus" and its logs are under "/usr/local/cygnus/logs/", then start the cygnus service from the cygnus home directory itself with "sh /usr/local/cygnus/bin/cygnus start". This works because the relative log path "./logs/cygnus.log" then resolves for cygnus; see the sketch below.
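A minimal shell sketch of that step, assuming the hypothetical /usr/local/cygnus layout from the example above (your installation actually lives under /usr/cygnus, so adjust the paths accordingly):
# Run the launcher from the Cygnus home directory so that the relative
# path ./logs/cygnus.log points at an existing, writable directory.
cd /usr/local/cygnus
mkdir -p logs
sh bin/cygnus start    # launcher path taken from the example above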
Add the Cygnus home to ~/.bash_profile and update the exports. This sets the classpath for the Cygnus home directory and also helps the log location resolve, so that you can start the service with "service cygnus start". For example:
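A hedged sketch of what those exports might look like in ~/.bash_profile; CYGNUS_HOME is an illustrative variable name and the JDK path is only an example, so verify both against your system:
# Environment assumed for the user that runs Cygnus; adjust to your installation.
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64   # example JDK location, verify on your machine
export CYGNUS_HOME=/usr/cygnus
export PATH=$PATH:$CYGNUS_HOME/bin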
Or update the cygnus logging configuration with the full path of the log location, e.g. "/usr/local/cygnus/logs/", and then start the service. For instance:
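For illustration, and assuming the example log directory used in this answer, the relevant property in the Cygnus log4j configuration would look like the line below (the next answer applies this same fix against /usr/cygnus):
flume.log.dir=/usr/local/cygnus/logs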
Since I had to solve this problem again, I'll take the opportunity to explain how I did it.
Basically, log4j needs to be configured.
You can find more details in this section of the installation and configuration instructions in the cygnus repository.
In my case I only use one cygnus instance, which writes to HDFS. To get cygnus working, I edited the following line in CYGNUS_PATH/conf/log4j.properties so that it points to the absolute path that holds the cygnus.log file:
flume.log.dir=/usr/cygnus/logs
where /usr/cygnus is my CYGNUS_PATH.
So now log4j writes cygnus's activity to /usr/cygnus/logs/cygnus.log.
Of course, you are free to put the log file wherever you want, as long as you reference it correctly in this properties file.
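One extra check that is an assumption on my part rather than part of the original fix: since cygnus_instance_1.conf above runs the service as CYGNUS_USER=cygnus, the directory named in flume.log.dir must exist and be writable by that user, for example:
# Create the log directory referenced by flume.log.dir and hand it to the
# service user defined in cygnus_instance_1.conf (CYGNUS_USER=cygnus).
sudo mkdir -p /usr/cygnus/logs
sudo chown -R cygnus /usr/cygnus/logs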
Hope this helps anyone who runs into a similar problem.