Getting Hazelcast Native Client working with Hibernate 5.2.x
I'm trying to get Hazelcast working with Hibernate, but it won't start unless I use the super_client option.
According to the documentation, the super client should only be used when your application sits in the same RAC or data center. That's the case locally, but in production they will definitely be separate, so the Native Client is the only option that works for us.
Super Client is a member of the cluster, it has socket connection to
every member in the cluster and it knows where the data is so it will
get to the data much faster. But Super Client has the clustering
overhead and it must be on the same data center even on the same RAC.
However Native client is not member and relies on one of the cluster
members. Native Clients can be anywhere in the LAN or WAN. It scales
much better and overhead is quite less. So if your clients are less
than Hazelcast nodes then Super client can be an option; otherwise
definitely try Native Client. As a rule of thumb: Try Native client
first, if it doesn't perform well enough for you, then consider Super
client.
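As far as I can tell, in Hazelcast 3.x the old super client corresponds to what is now called a lite member, while the Native Client uses the client API. A rough Kotlin sketch of the two approaches, assuming Hazelcast 3.10.4 on the classpath (not taken from my real code):
import com.hazelcast.client.HazelcastClient
import com.hazelcast.client.config.ClientConfig
import com.hazelcast.config.Config
import com.hazelcast.core.Hazelcast

fun main() {
    // "Super client" style: joins the cluster as a data-less lite member,
    // so it carries clustering overhead and should sit close to the other members.
    val liteMember = Hazelcast.newHazelcastInstance(Config().setLiteMember(true))

    // Native client style: connects to an existing member over the network
    // and can live anywhere that can reach the cluster.
    val clientConfig = ClientConfig()
    clientConfig.networkConfig.addAddress("127.0.0.1:5701")
    val client = HazelcastClient.newHazelcastClient(clientConfig)

    client.shutdown()
    liteMember.shutdown()
}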
The best option for starting Hazelcast seems to be Docker:
docker pull hazelcast/hazelcast:3.10.4
docker run --name=hazelcast -d=true -p 5701:5701 hazelcast/hazelcast:3.10.4
This is what it looks like once it's running; I double-checked that the Hazelcast port 5701 is exposed, and it clearly is.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
77a5a0bed5eb hazelcast/hazelcast:3.10.4 "bash -c 'set -euo p…" 3 days ago Up 6 hours 0.0.0.0:5701->5701/tcp hazelcast
The Docker Hub documentation also mentions how to pass in JAVA_OPTS. I'm not sure whether this is required or optional, or what it is for, but it didn't help me get up and running:
-e JAVA_OPTS="-Dhazelcast.local.publicAddress=127.0.0.1:5701"
telnet 127.0.0.1 5701
connects successfully to localhost:5701, so I know the port is open. The Docker docs don't mention what the default password of this running Hazelcast instance is; my assumption is that it's either blank or dev-pass, as mentioned in several older tutorials.
I'm using Hibernate 5.2.13.Final
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>${hibernate.version}</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <version>${hibernate.version}</version>
    <exclusions>
        <exclusion>
            <groupId>cglib</groupId>
            <artifactId>cglib</artifactId>
        </exclusion>
        <exclusion>
            <groupId>dom4j</groupId>
            <artifactId>dom4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>${hibernate-validator.version}</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-c3p0</artifactId>
    <version>${hibernate.version}</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-java8</artifactId>
    <version>${hibernate.version}</version>
</dependency>
For Hazelcast, according to the documentation, two dependencies are needed:
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>3.10.4</version>
</dependency>
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-hibernate52</artifactId>
    <version>1.2.3</version>
</dependency>
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-client</artifactId>
    <version>3.10.4</version>
</dependency>
The documentation shows the following links:
Clicking Hibernate 5 shows that hazelcast-hibernate52 is the correct dependency.
When I click See here for the details, I get to some documentation that looks somewhat outdated:
Assuming it's just a typo, I move on to the example:
<!DOCTYPE hibernate-configuration SYSTEM "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <property name="hibernate.cache.use_second_level_cache">true</property>
        <property name="hibernate.cache.use_query_cache">false</property>
        <property name="hibernate.cache.use_minimal_puts">true</property>
        <property name="hibernate.cache.region.factory_class">com.hazelcast.hibernate.HazelcastCacheRegionFactory</property>
        <property name="hibernate.cache.hazelcast.use_native_client">false</property>
        <property name="hibernate.cache.hazelcast.native_client_hosts">127.0.0.1</property>
        <property name="hibernate.cache.hazelcast.native_client_group">hibernate</property>
        <property name="hibernate.cache.hazelcast.native_client_password">password</property>
        <property name="hibernate.connection.driver_class">org.apache.derby.jdbc.EmbeddedDriver</property>
        <property name="hibernate.connection.url">jdbc:derby:hibernateDB</property>
        <mapping resource="Employee.hbm.xml"/>
    </session-factory>
</hibernate-configuration>
In the example, Use Native Client is set to false, yet the native client properties are still being configured. Is that a typo, or is it the correct configuration?
I'm trying these settings on a standard Hibernate + Postgres setup with C3P0; this is my persistence.xml:
<properties>
<!-- Hibernate Config -->
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQL95Dialect" />
<property name="hibernate.generate_statistics" value="false" />
<property name="hibernate.hbm2ddl.auto" value="validate"/>
<property name="hibernate.physical_naming_strategy" value="za.co.convirt.util.CustomApplicationNamingStrategy"/>
<property name="hibernate.connection.charSet" value="UTF-8"/>
<property name="hibernate.show_sql" value="false" />
<property name="hibernate.format_sql" value="false"/>
<property name="hibernate.use_sql_comments" value="false"/>
<!-- JDBC Config -->
<property name="javax.persistence.jdbc.driver" value="org.postgresql.Driver" />
<property name="javax.persistence.jdbc.time_zone" value="UTC" />
<property name="hibernate.jdbc.time_zone" value="UTC"/>
<!-- Connection Pool -->
<property name="hibernate.connection.provider_class" value="org.hibernate.connection.C3P0ConnectionProvider" />
<property name="hibernate.c3p0.max_size" value="5" />
<property name="hibernate.c3p0.min_size" value="1" />
<property name="hibernate.c3p0.acquire_increment" value="1" />
<property name="hibernate.c3p0.idle_test_period" value="300" />
<property name="hibernate.c3p0.max_statements" value="0" />
<property name="hibernate.c3p0.timeout" value="100" />
<!-- Batch writing -->
<property name="hibernate.jdbc.batch_size" value = "50"/>
<property name="hibernate.order_updates" value = "true"/>
<property name="hibernate.jdbc.batch_versioned_data" value = "true"/>
</properties>
Some parameters are supplied programmatically (this has been in use for a long time, so I know it works, but I'm adding it here in case it helps make the code clearer):
fun paramsFromArgs(args: Array<String>): Map<String, String> {
    val hibernateMap = mutableMapOf<String, String>()
    args.forEach {
        if (it.isNotBlank()) {
            if (it.startsWith("hibernate") || it.startsWith("javax.persistence")) {
                val split = it.split("=", limit = 2)
                hibernateMap.put(split.get(0), split.get(1))
            }
        }
    }
    return hibernateMap
}
Now, when I set up the second-level cache with Hazelcast:
paramsDefault.add("hibernate.cache.use_query_cache=true")
paramsDefault.add("hibernate.cache.use_second_level_cache=true")
paramsDefault.add("hibernate.cache.region.factory_class=com.hazelcast.hibernate.HazelcastCacheRegionFactory")
paramsDefault.add("hibernate.cache.provider_configuration_file_resource_path=hazelcast.xml")
paramsDefault.add("hibernate.cache.hazelcast.use_native_client=false")
paramsDefault.add("hibernate.cache.hazelcast.native_client_address=127.0.0.1")
paramsDefault.add("hibernate.cache.hazelcast.native_client_group=dev")
paramsDefault.add("hibernate.cache.hazelcast.native_client_password=dev-pass22222asfasdf")
paramsDefault.add("hibernate.cache.hazelcast.client.statistics.enabled=true")
Database.setupEntityManagerFactory("default",
Database.paramsFromArgs(paramsDefault.toTypedArray()))
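Database.setupEntityManagerFactory itself isn't shown here; judging by the stack trace further down it simply hands the unit name and the property map to JPA, so a minimal sketch under that assumption would be:
import javax.persistence.EntityManagerFactory
import javax.persistence.Persistence

object Database {
    // Minimal sketch: bootstrap the persistence unit with the programmatic
    // properties overriding whatever is in persistence.xml.
    fun setupEntityManagerFactory(unitName: String, params: Map<String, String>): EntityManagerFactory =
        Persistence.createEntityManagerFactory(unitName, params)
}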
Setting use_native_client to false as in the example doesn't seem to do anything; with the logs in debug mode I don't see anything related to Hazelcast at all.
Switching it to true (which makes more sense given that a password and an IP address are being configured), it blows up at startup.
hibernate.cache.hazelcast.use_native_client=true
hibernate.cache.hazelcast.native_client_address=127.0.0.1
hibernate.cache.hazelcast.native_client_group=dev
hibernate.cache.hazelcast.native_client_password=dev-pass
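My understanding is that with use_native_client=true the hazelcast-hibernate module builds a Hazelcast client from these properties, roughly as if you wired up a ClientConfig by hand, something like this (just my mental model of the mapping, not the library's actual code):
import com.hazelcast.client.HazelcastClient
import com.hazelcast.client.config.ClientConfig

fun main() {
    val clientConfig = ClientConfig()
    clientConfig.groupConfig.setName("dev")              // hibernate.cache.hazelcast.native_client_group
    clientConfig.groupConfig.setPassword("dev-pass")     // hibernate.cache.hazelcast.native_client_password
    clientConfig.networkConfig.addAddress("127.0.0.1")   // hibernate.cache.hazelcast.native_client_address
    val client = HazelcastClient.newHazelcastClient(clientConfig)
    client.shutdown()
}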
DEB [16:18:26.531] setup org.hibernate.jpa.internal.util.LogHelper PersistenceUnitInfo [
name: default
persistence provider classname: org.hibernate.jpa.HibernatePersistenceProvider
classloader: null
excludeUnlistedClasses: false
JTA datasource: null
Non JTA datasource: null
Transaction type: RESOURCE_LOCAL
PU root URL: file:/Users/vlad/Code/.../...
Shared Cache Mode: null
Validation Mode: null
Jar files URLs []
Managed classes names []
Mapping files names []
Properties [
...
hibernate.jdbc.time_zone: UTC
javax.persistence.jdbc.password:
hibernate.cache.region.factory_class: com.hazelcast.hibernate.HazelcastCacheRegionFactory
hibernate.c3p0.idle_test_period: 300
hibernate.cache.hazelcast.use_native_client: true
...
hibernate.cache.hazelcast.native_client_group: dev
...
javax.persistence.jdbc.driver: org.postgresql.Driver
hibernate.use_sql_comments: false
hibernate.cache.hazelcast.native_client_address: 127.0.0.1
...
hibernate.cache.hazelcast.client.statistics.enabled: true
hibernate.dialect: org.hibernate.dialect.PostgreSQL95Dialect
hibernate.cache.provider_configuration_file_resource_path: hazelcast.xml]
According to the logs, HazelcastCacheRegionFactory is being used:
DEB [16:18:26.884] setup org.hibernate.cache.internal.RegionFactoryInitiator
Cache region factory : com.hazelcast.hibernate.HazelcastCacheRegionFactory
This is followed by two log entries that don't match my logging format (I guess it isn't using SLF4J?):
Sep 12, 2018 2:18:29 PM com.hazelcast.hibernate.HazelcastCacheRegionFactory
INFO: Starting up HazelcastCacheRegionFactory
...and then it fails to build the Hibernate session factory:
ERR [16:18:29.802] setup ApplicationApi [PersistenceUnit: default] Unable to build Hibernate SessionFactory
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.persistenceException (EntityManagerFactoryBuilderImpl.java:970)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build (EntityManagerFactoryBuilderImpl.java:895)
at org.hibernate.jpa.HibernatePersistenceProvider.createEntityManagerFactory (HibernatePersistenceProvider.java:58)
at javax.persistence.Persistence.createEntityManagerFactory (Persistence.java:55)
at za.co.convirt.util.Database.setupEntityManagerFactory (Database.kt:20)
at za.co.convirt.util.Database.setupEntityManagerFactory$default (Database.kt:19)
at ApplicationApi$main$hibernateThread.invoke (ApplicationApi.kt:171)
at ApplicationApi$main$hibernateThread.invoke (ApplicationApi.kt:26)
at kotlin.concurrent.ThreadsKt$thread$thread.run (Thread.kt:30)
To make sure it wasn't failing because of missing entity annotations, I added some @Cache annotations to the entities, but that made no difference.
@Table
@Entity
@EntityListeners(AuditListener::class)
@PersistenceContext(unitName = "default")
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "Seat")
class Seat(
name: String,
...
I also added a hazelcast.xml, not sure whether it's needed:
<hazelcast
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.hazelcast.com/schema/config
http://www.hazelcast.com/schema/config/hazelcast-config-3.11.xsd">
<services enable-defaults="true"/>
</hazelcast>
Is Hibernate 5.2.x supported? (this ticket indicates that the Hibernate 5.2 issues were fixed, so my assumption is that it should work)
I want to run a standalone Hazelcast instance on one server and have multiple instances of the application use it as a central cache. What am I missing in my setup to make this work?
Update 1:
I've written a small piece of code that successfully connects to the local Hazelcast instance (this is on my dev machine, same as the rest of the code):
import com.hazelcast.client.HazelcastClient
import com.hazelcast.client.config.ClientConfig
import java.util.*
fun main(args: Array<String>) {
    val config = ClientConfig()
    config.getNetworkConfig().addAddress("127.0.0.1:5701")
    val hazelcastInstance = HazelcastClient.newHazelcastClient(config)
    val map = hazelcastInstance.getMap<String, String>("blah")
    map.forEach { t, u ->
        println(" $t -> $u ")
    }
    map.put("${Random().nextInt()}", "${Random().nextInt()}")
    hazelcastInstance.shutdown()
}
To prove that it is storing to and retrieving from the cache, I restarted the main method several times, and each time the number of entries in blah increased:
Run1:
No printlns
Run2:
1498523740 -> -1418154711
Run3:
1498523740 -> -1418154711
-248583979 -> -940621527
So Hazelcast itself is working fine...
Update 2:
I can now connect to Hazelcast from Hibernate, but it throws an exception on every lookup it has to perform.
After removing hazelcast.xml from my classpath and dropping the group and password options, Hibernate starts up and connects.
paramsDefault.add("hibernate.cache.use_query_cache=true")
paramsDefault.add("hibernate.cache.use_second_level_cache=true")
paramsDefault.add("hibernate.cache.region.factory_class=com.hazelcast.hibernate.HazelcastCacheRegionFactory")
paramsDefault.add("hibernate.cache.hazelcast.use_native_client=true")
paramsDefault.add("hibernate.cache.hazelcast.native_client_address=127.0.0.1")
Output:
Sep 13, 2018 6:02:37 PM com.hazelcast.hibernate.HazelcastCacheRegionFactory
INFO: Starting up HazelcastCacheRegionFactory
Sep 13, 2018 6:02:37 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [dev] [3.10.4] HazelcastClient 3.10.4 (20180727 - 0f51fcf) is STARTING
Sep 13, 2018 6:02:38 PM com.hazelcast.client.spi.ClientInvocationService
INFO: hz.client_0 [dev] [3.10.4] Running with 2 response threads
Sep 13, 2018 6:02:38 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [dev] [3.10.4] HazelcastClient 3.10.4 (20180727 - 0f51fcf) is STARTED
Sep 13, 2018 6:02:38 PM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [dev] [3.10.4] Trying to connect to [127.0.0.1]:5701 as owner member
Sep 13, 2018 6:02:38 PM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [dev] [3.10.4] Setting ClientConnection{alive=true, connectionId=1, channel=NioChannel{/127.0.0.1:61191->/127.0.0.1:5701}, remoteEndpoint=[127.0.0.1]:5701, lastReadTime=2018-09-13 18:02:38.356, lastWriteTime=2018-09-13 18:02:38.352, closedTime=never, lastHeartbeatRequested=never, lastHeartbeatReceived=never, connected server version=3.10.4} as owner with principal ClientPrincipal{uuid='532bf500-e03e-4620-a9c2-14bb55c07166', ownerUuid='2fb66fa1-a17f-49fe-ba2b-bf585d43906d'}
Sep 13, 2018 6:02:38 PM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [dev] [3.10.4] Authenticated with server [127.0.0.1]:5701, server version:3.10.4 Local address: /127.0.0.1:61191
Sep 13, 2018 6:02:38 PM com.hazelcast.client.spi.impl.ClientMembershipListener
INFO: hz.client_0 [dev] [3.10.4]
Members [1] {
Member [127.0.0.1]:5701 - 2fb66fa1-a17f-49fe-ba2b-bf585d43906d
}
Sep 13, 2018 6:02:38 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [dev] [3.10.4] HazelcastClient 3.10.4 (20180727 - 0f51fcf) is CLIENT_CONNECTED
Sep 13, 2018 6:02:38 PM com.hazelcast.internal.diagnostics.Diagnostics
INFO: hz.client_0 [dev] [3.10.4] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
However, any entity retrieval that hits Hazelcast just stalls.
I've restarted Hazelcast with JAVA_OPTS to see whether it makes a difference; it doesn't look like it does:
docker run --name=hazelcast -d=true -p 5701:5701 -e JAVA_OPTS="-Dhazelcast.local.publicAddress=127.0.0.1:5701" hazelcast/hazelcast:3.10.4
Digging into the Hazelcast logs with:
docker logs -f hazelcast
I see the following:
Sep 13, 2018 6:02:11 PM com.hazelcast.client.ClientEndpointManager
INFO: [127.0.0.1]:5701 [dev] [3.10.4] Destroying ClientEndpoint{connection=Connection[id=2, /172.17.0.2:5701->/172.17.0.1:56514, endpoint=[172.17.0.1]:56514, alive=false, type=JAVA_CLIENT], principal='ClientPrincipal{uuid='d8a9b730-c5fd-458c-9ab6-671aece99305', ownerUuid='2fb66fa1-a17f-49fe-ba2b-bf585d43906d'}, ownerConnection=true, authenticated=true, clientVersion=3.10.4, creationTime=1536861657874, latest statistics=null}
Sep 13, 2018 6:02:38 PM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [127.0.0.1]:5701 [dev] [3.10.4] Accepting socket connection from /172.17.0.1:56516
Sep 13, 2018 6:02:38 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [127.0.0.1]:5701 [dev] [3.10.4] Established socket connection between /172.17.0.2:5701 and /172.17.0.1:56516
Sep 13, 2018 6:02:38 PM com.hazelcast.client.impl.protocol.task.AuthenticationMessageTask
INFO: [127.0.0.1]:5701 [dev] [3.10.4] Received auth from Connection[id=3, /172.17.0.2:5701->/172.17.0.1:56516, endpoint=null, alive=true, type=JAVA_CLIENT], successfully authenticated, principal: ClientPrincipal{uuid='532bf500-e03e-4620-a9c2-14bb55c07166', ownerUuid='2fb66fa1-a17f-49fe-ba2b-bf585d43906d'}, owner connection: true, client version: 3.10.4
Sep 13, 2018 6:03:11 PM com.hazelcast.transaction.TransactionManagerService
INFO: [127.0.0.1]:5701 [dev] [3.10.4] Committing/rolling-back live transactions of client, UUID: d8a9b730-c5fd-458c-9ab6-671aece99305
And once the cache is hit:
Sep 13, 2018 6:05:43 PM com.hazelcast.map.impl.operation.EntryOperation
SEVERE: [127.0.0.1]:5701 [dev] [3.10.4] java.lang.ClassNotFoundException: org.hibernate.cache.spi.entry.StandardCacheEntryImpl
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: org.hibernate.cache.spi.entry.StandardCacheEntryImpl
at com.hazelcast.internal.serialization.impl.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:86)
at com.hazelcast.internal.serialization.impl.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:75)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:48)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:269)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:574)
at com.hazelcast.hibernate.serialization.Value.readData(Value.java:78)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:48)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:187)
at com.hazelcast.query.impl.CachedQueryEntry.getValue(CachedQueryEntry.java:75)
at com.hazelcast.hibernate.distributed.LockEntryProcessor.process(LockEntryProcessor.java:49)
at com.hazelcast.hibernate.distributed.LockEntryProcessor.process(LockEntryProcessor.java:32)
at com.hazelcast.map.impl.operation.EntryOperator.process(EntryOperator.java:319)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKeyValueInternal(EntryOperator.java:182)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKey(EntryOperator.java:167)
at com.hazelcast.map.impl.operation.EntryOperation.runVanilla(EntryOperation.java:384)
at com.hazelcast.map.impl.operation.EntryOperation.call(EntryOperation.java:188)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:202)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:191)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:406)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:433)
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:581)
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:566)
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:525)
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:215)
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:60)
at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processMessage(AbstractPartitionMessageTask.java:67)
at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:123)
at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.doRun(AbstractMessageTask.java:111)
at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:101)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:155)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:125)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:100)
Caused by: java.lang.ClassNotFoundException: org.hibernate.cache.spi.entry.StandardCacheEntryImpl
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at com.hazelcast.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:173)
at com.hazelcast.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:147)
at com.hazelcast.nio.IOUtil$ClassLoaderAwareObjectInputStream.resolveClass(IOUtil.java:615)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at com.hazelcast.internal.serialization.impl.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:82)
... 34 more
Do I need to include some kind of JAR in my Hazelcast Docker setup, or what is going on here?
It looks like you are trying to use the loopback address from another server outside the Docker network.
You may want to try bridging to eliminate the Docker network address translation.
Also, since 0.0.0.0 is bound, the Hazelcast listener should be available on all IP addresses.
I would simplify and verify Hazelcast first. If you have Enterprise, use the console application; otherwise write a simple Java main that starts a server. Then try connecting a client using the real IP address. Once that works, move on to the Hibernate configuration.
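For example, something along these lines (a rough sketch; 192.168.1.10 stands in for the server's real address):
import com.hazelcast.client.HazelcastClient
import com.hazelcast.client.config.ClientConfig
import com.hazelcast.config.Config
import com.hazelcast.core.Hazelcast

fun main(args: Array<String>) {
    if (args.firstOrNull() == "server") {
        // Run this on the server machine: starts a plain Hazelcast member on 5701.
        Hazelcast.newHazelcastInstance(Config())
    } else {
        // Run this from the app machine, pointing at the server's real IP, not loopback.
        val config = ClientConfig()
        config.networkConfig.addAddress("192.168.1.10:5701") // placeholder address
        val client = HazelcastClient.newHazelcastClient(config)
        println("Connected, members: ${client.cluster.members}")
        client.shutdown()
    }
}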
Finally got it working; this is what was needed:
1: Make sure you have all three of these dependencies. Initially I was missing the first one, but for some reason didn't get the ClassNotFound exception you would expect. It doesn't appear to be a transitive dependency of hazelcast-client or hazelcast-hibernate52:
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>${hazelcast.version}</version>
</dependency>
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-hibernate52</artifactId>
    <version>${hazelcast-hibernate.version}</version>
</dependency>
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-client</artifactId>
    <version>${hazelcast.version}</version>
</dependency>
2: Don't specify a password if your Hazelcast dev instance doesn't have one. 127.0.0.1 works fine; there's no need to run it on an external server while developing.
paramsDefault.add("hibernate.cache.use_query_cache=true")
paramsDefault.add("hibernate.cache.use_second_level_cache=true")
paramsDefault.add("hibernate.cache.region.factory_class=com.hazelcast.hibernate.HazelcastCacheRegionFactory")
paramsDefault.add("hibernate.cache.hazelcast.use_native_client=true")
paramsDefault.add("hibernate.cache.hazelcast.native_client_address=127.0.0.1")
// paramsDefault.add("hibernate.cache.hazelcast.native_client_group=$ENV")
// paramsDefault.add("hibernate.cache.hazelcast.native_client_password=dev-pass")
3: Delete hazelcast.xml. After removing the hazelcast.xml file, Hibernate actually started, even though my hazelcast.xml was effectively a one-liner saying "configure with defaults".
4: Make sure all entities are marked Serializable; otherwise the entities won't be cached and will cause exceptions on the Hazelcast server itself.
@Table
@Entity
@BatchSize(size = 50)
@PersistenceContext(unitName = "default")
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "Tag")
class Tag(
name: String,
) : Serializable {
5: If your entities contain @OneToMany, @ManyToOne or other entities, make sure those entities are also Serializable.
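For example, something along these lines (a rough sketch with made-up entity names, just to show the shape):
import java.io.Serializable
import javax.persistence.Entity
import javax.persistence.GeneratedValue
import javax.persistence.Id
import javax.persistence.ManyToOne
import javax.persistence.OneToMany
import org.hibernate.annotations.Cache
import org.hibernate.annotations.CacheConcurrencyStrategy

// Hypothetical parent/child pair: the cached entity and every entity it
// references implement Serializable, otherwise Hazelcast cannot serialize
// the cache entry and the server throws.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "Venue")
class Venue(var name: String = "") : Serializable {
    @Id
    @GeneratedValue
    var id: Long? = null

    @OneToMany(mappedBy = "venue")
    var bookings: MutableList<Booking> = mutableListOf()
}

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "Booking")
class Booking(var reference: String = "") : Serializable {
    @Id
    @GeneratedValue
    var id: Long? = null

    @ManyToOne
    var venue: Venue? = null
}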
6: Write a small script to make sure your entities are actually being cached:
import com.hazelcast.client.HazelcastClient
import com.hazelcast.client.config.ClientConfig
fun main(args: Array<String>) {
    val config = ClientConfig()
    config.getNetworkConfig().addAddress("127.0.0.1:5701")
    val hazelcastInstance = HazelcastClient.newHazelcastClient(config)
    val map = hazelcastInstance.getMap<Any, Any>("Tag")
    println("=================")
    map.forEach { t, u ->
        println(" $t -> $u ")
    }
    println("=================")
    hazelcastInstance.shutdown()
}
The above script will println all the Tag entities currently in the cache.
7: When starting the Docker instance, make sure you have exposed the port; without the -p option nothing works.
docker run --name=hazelcast -d=true -p 5701:5701 -e JAVA_OPTS="-Dhazelcast.local.publicAddress=127.0.0.1:5701" hazelcast/hazelcast:3.10.4
8: Check the Hazelcast logs to see whether your Java/Kotlin client is connecting:
docker logs -f hazelcast
When there is a connection you should see something like this:
Sep 13, 2018 9:05:06 PM com.hazelcast.client.ClientEndpointManager
INFO: [127.0.0.1]:5701 [dev] [3.10.4] Destroying ClientEndpoint{connection=Connection[id=32, /172.17.0.2:5701->/172.17.0.1:56574, endpoint=[172.17.0.1]:56574, alive=false, type=JAVA_CLIENT], principal='ClientPrincipal{uuid='99cbf1b4-d11c-462d-bd87-4c069bc9b2ef', ownerUuid='2fb66fa1-a17f-49fe-ba2b-bf585d43906d'}, ownerConnection=true, authenticated=true, clientVersion=3.10.4, creationTime=1536872631771, latest statistics=null}
Sep 13, 2018 9:05:19 PM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [127.0.0.1]:5701 [dev] [3.10.4] Accepting socket connection from /172.17.0.1:56576
Sep 13, 2018 9:05:19 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [127.0.0.1]:5701 [dev] [3.10.4] Established socket connection between /172.17.0.2:5701 and /172.17.0.1:56576
Sep 13, 2018 9:05:19 PM com.hazelcast.client.impl.protocol.task.AuthenticationMessageTask
INFO: [127.0.0.1]:5701 [dev] [3.10.4] Received auth from Connection[id=33, /172.17.0.2:5701->/172.17.0.1:56576, endpoint=null, alive=true, type=JAVA_CLIENT], successfully authenticated, principal: ClientPrincipal{uuid='ff51de39-fd9c-4ecf-bdd4-bbdb6ec6c79e', ownerUuid='2fb66fa1-a17f-49fe-ba2b-bf585d43906d'}, owner connection: true, client version: 3.10.4
These seem to be the key points for getting Hazelcast with Hibernate working properly.