Hazelcast In-Memory Data Grid, or "How do I get my data in here?"

Added by Jorge Moratilla | Feb 23, 2012 10:54

Map Outline

Hazelcast In-Memory Data Grid, or "How do I get my data in here?"
1 Introduction
1.1 What is it, basically?
1.1.1 It is a system for distributing Java data structures in memory. Highly scalable in cluster and grid environments, built on a distributed hash table (DHT) architecture
1.1.1.1 DHT
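The DHT idea above can be sketched in plain Java: every key hashes to a fixed partition, and each partition is owned by exactly one cluster member. This is a minimal, single-JVM illustration of the concept, not Hazelcast's actual implementation; the names `partitionFor` and `ownerOf` are hypothetical, and 271 is assumed here as Hazelcast's default partition count.

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the DHT idea: a key always hashes to the same
// partition, and each partition is owned by exactly one member.
// partitionFor/ownerOf are illustrative names, not Hazelcast API.
public class DhtSketch {

    // Map a key to one of `partitionCount` partitions.
    static int partitionFor(Object key, int partitionCount) {
        // Mask the sign bit instead of Math.abs, which overflows
        // for Integer.MIN_VALUE.
        return (key.hashCode() & Integer.MAX_VALUE) % partitionCount;
    }

    // Assign a partition to a member, round-robin style.
    static String ownerOf(int partitionId, List<String> members) {
        return members.get(partitionId % members.size());
    }

    public static void main(String[] args) {
        List<String> members = Arrays.asList("nodeA", "nodeB", "nodeC");
        int partitions = 271; // assumed Hazelcast default
        int p = partitionFor("customer:42", partitions);
        System.out.println("key 'customer:42' -> partition " + p
                + " owned by " + ownerOf(p, members));
    }
}
```

Because the key-to-partition mapping is deterministic, any member can compute which node owns a given key without asking a central coordinator, which is what removes the need for a master.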
1.1.2 Features (1)
1.1.2.1 There are two editions
1.1.2.1.1 Community
1.1.2.1.1.1 Apache License 2.0
1.1.2.1.2 Enterprise
1.1.2.1.2.1 Additional features
1.1.2.1.2.1.1 Management Console
1.1.2.1.2.1.2 Elastic Memory
1.1.2.1.2.1.3 JAAS
1.1.2.2 It is portable
1.1.2.2.1 It is Java
1.1.2.3 Built-in support for statistics and cluster-membership events
1.1.2.4 Supports dynamic clusters
1.1.2.4.1 Dynamic fail-over
1.1.2.4.2 Dynamic HTTP session clustering
1.1.2.4.3 Dynamic scaling to hundreds of servers
1.1.2.4.4 Dynamic partitioning with backups
1.1.2.5 Very fast: thousands of operations per second
1.1.2.6 Very efficient: low memory and CPU consumption
1.1.2.7 The default configuration keeps 1 backup of everything, although this is configurable
1.1.2.8 Communication between cluster members
1.1.2.8.1 Networking
1.1.2.8.1.1 Multicast
1.1.2.8.1.2 TCP/IP
1.1.2.8.1.3 Supports SSL communications
1.1.2.8.2 IO
1.1.2.8.2.1 Communication between cluster members always uses Java NIO
1.1.3 Features (2)
1.1.3.1 Distributed implementations of Java structures: Map, Set, Queue, List, Lock, ExecutorService
1.1.3.2 Distributed topics for pub/sub messaging
1.1.3.3 Distributed implementation of listeners and events
1.1.3.4 Support for transactional operations in J2EE architectures (JCA)
1.1.3.5 Very lightweight (a single 1.5 MB jar)
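The pub/sub topics above fan each published message out to every registered listener. As a rough single-JVM analogy (not Hazelcast's ITopic implementation, which does the same fan-out across the cluster), a topic is just a named channel with a listener list; all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Single-JVM analogy of a pub/sub topic. Hazelcast's distributed
// ITopic follows the same publish/listener shape, but cluster-wide.
public class TopicSketch {
    public interface MessageListener { void onMessage(String message); }

    // CopyOnWriteArrayList allows listeners to be added while publishing.
    private final List<MessageListener> listeners = new CopyOnWriteArrayList<>();

    public void addMessageListener(MessageListener l) { listeners.add(l); }

    // Deliver the message to every registered listener.
    public void publish(String message) {
        for (MessageListener l : listeners) l.onMessage(message);
    }

    public static void main(String[] args) {
        TopicSketch topic = new TopicSketch();
        List<String> received = new ArrayList<>();
        topic.addMessageListener(received::add);
        topic.publish("hello");
        System.out.println(received); // [hello]
    }
}
```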
1.2 Common uses
1.2.1 Sharing data/state across multiple servers (web session sharing)
1.2.2 Caching data in a distributed fashion to improve performance
1.2.3 Enabling high availability of the application through clustering
1.2.4 Providing secure communications between servers
1.2.5 Partitioning data in memory
1.2.6 Sending and receiving messages between applications
1.2.7 Distributing the workload across servers
1.2.8 Adding parallel processing to the application
1.2.9 Providing fault-tolerant data management
1.3 Architecture
1.3.1 Stand-alone
1.3.1.1 run.sh
1.3.2 Embedded
1.3.2.1 Inside the application
1.3.2.1.1 Client
1.3.2.1.2 SuperClient
1.3.2.1.3 Node
1.3.2.2 As a resource in a J2EE application
1.3.3 Grid
1.3.3.1 A master node?
1.3.3.1.1 There is no single cluster master or anything that can become a single point of failure. Every node in the cluster has equal rights and responsibilities; no node is superior, and there is no dependency on any external 'server' or 'master' kind of concept.
1.3.3.2 Authentication
1.3.3.3 Multicast communications (for reliable networks)
1.3.3.4 TCP/IP communications for remote networks
1.3.3.5 AWS support
1.3.4 Footprint
1.3.4.1 It is a 1.6 MB jar library
1.3.5 Configuration
1.3.5.1 Authentication
1.3.5.1.1 <group> <name>dev</name> <password>dev-pass</password> </group>
1.3.5.2 Network
1.3.5.2.1 Port
1.3.5.2.1.1 <port auto-increment="true">5701</port>
1.3.5.2.2 MultiCast?
1.3.5.2.2.1 <multicast enabled="true"> <multicast-group>224.2.2.3</multicast-group> <multicast-port>54327</multicast-port> </multicast>
1.3.5.2.3 TCP?
1.3.5.2.3.1 <tcp-ip enabled="false"> <interface>127.0.0.1</interface> </tcp-ip>
1.3.5.2.4 SSL?
1.3.5.2.4.1 Symmetric
1.3.5.2.4.1.1 <symmetric-encryption enabled="false">
    <!-- encryption algorithm such as DES/ECB/PKCS5Padding, PBEWithMD5AndDES,
         AES/CBC/PKCS5Padding, Blowfish, DESede -->
    <algorithm>PBEWithMD5AndDES</algorithm>
    <!-- salt value to use when generating the secret key -->
    <salt>thesalt</salt>
    <!-- pass phrase to use when generating the secret key -->
    <password>thepass</password>
    <!-- iteration count to use when generating the secret key -->
    <iteration-count>19</iteration-count>
</symmetric-encryption>
1.3.5.2.4.2 Asymmetric
1.3.5.2.4.2.1 <asymmetric-encryption enabled="false">
    <!-- encryption algorithm -->
    <algorithm>RSA/NONE/PKCS1PADDING</algorithm>
    <!-- private key password -->
    <keyPassword>thekeypass</keyPassword>
    <!-- private key alias -->
    <keyAlias>local</keyAlias>
    <!-- key store type -->
    <storeType>JKS</storeType>
    <!-- key store password -->
    <storePassword>thestorepass</storePassword>
    <!-- path to the key store -->
    <storePath>keystore</storePath>
</asymmetric-encryption>
1.3.5.2.5 AWS?
1.3.5.2.5.1 <aws enabled="false">
    <access-key>my-access-key</access-key>
    <secret-key>my-secret-key</secret-key>
    <!-- optional, default is us-east-1 -->
    <region>us-west-1</region>
    <!-- optional, only instances belonging to this group will be discovered,
         default will try all running instances -->
    <security-group-name>hazelcast-sg</security-group-name>
    <tag-key>type</tag-key>
    <tag-value>hz-nodes</tag-value>
</aws>
1.3.5.2.6 WAN Replication
1.3.5.2.6.1 <wan-replication name="my-wan-cluster">
    <target-cluster group-name="tokyo" group-password="tokyo-pass">
        <replication-impl>com.hazelcast.impl.wan.WanNoDelayReplication</replication-impl>
        <end-points>
            <address>10.2.1.1:5701</address>
            <address>10.2.1.2:5701</address>
        </end-points>
    </target-cluster>
    <target-cluster group-name="london" group-password="london-pass">
        <replication-impl>com.hazelcast.impl.wan.WanNoDelayReplication</replication-impl>
        <end-points>
            <address>10.3.5.1:5701</address>
            <address>10.3.5.2:5701</address>
        </end-points>
    </target-cluster>
</wan-replication>
1.3.5.3 Executors
1.3.5.3.1 What are they?
1.3.5.3.1.1 The Executor framework lets you asynchronously execute your tasks, i.e. logical units of work such as a database query, a complex calculation, image rendering, etc.
1.3.5.4 Configuration of
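Hazelcast's distributed executor builds on the standard java.util.concurrent contract. The sketch below uses the plain JDK ExecutorService to show the submit/Future pattern involved; it is a local stand-in, not the distributed version, and `runTask` is an illustrative helper, not Hazelcast API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Local stand-in for the executor pattern Hazelcast distributes:
// submit a Callable (a logical unit of work) and read its Future.
public class ExecutorSketch {
    public static int runTask() {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // The Callable stands in for a DB query, calculation, etc.
            Future<Integer> result = pool.submit(() -> 6 * 7);
            return result.get(); // blocks until the task finishes
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("task result = " + runTask()); // task result = 42
    }
}
```

With Hazelcast the same submit/Future shape applies, but the task is serialized and executed on some member of the cluster instead of a local thread pool.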
1.3.5.4.1 Queues
1.3.5.4.2 Maps
1.3.5.4.3 Semaphores
1.3.5.5 Editing the hazelcast.xml file
1.3.5.5.1 <?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-basic.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
    <network>
        <port auto-increment="true">5701</port>
        <join>
            <multicast enabled="true">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="false">
                <hostname>tsort.local</hostname>
                <interface>10.0.2.10</interface>
            </tcp-ip>
            <aws enabled="false">
                <access-key>my-access-key</access-key>
                <secret-key>my-secret-key</secret-key>
                <!-- optional, default is us-east-1 -->
                <region>us-west-1</region>
                <!-- optional, only instances belonging to this group will be
                     discovered, default will try all running instances -->
                <security-group-name>hazelcast-sg</security-group-name>
                <tag-key>type</tag-key>
                <tag-value>hz-nodes</tag-value>
            </aws>
        </join>
        <interfaces enabled="true">
            <interface>10.0.2.*</interface>
        </interfaces>
        <symmetric-encryption enabled="false">
            <!-- encryption algorithm such as DES/ECB/PKCS5Padding, PBEWithMD5AndDES,
                 AES/CBC/PKCS5Padding, Blowfish, DESede -->
            <algorithm>PBEWithMD5AndDES</algorithm>
            <!-- salt value to use when generating the secret key -->
            <salt>thesalt</salt>
            <!-- pass phrase to use when generating the secret key -->
            <password>thepass</password>
            <!-- iteration count to use when generating the secret key -->
            <iteration-count>19</iteration-count>
        </symmetric-encryption>
        <asymmetric-encryption enabled="false">
            <!-- encryption algorithm -->
            <algorithm>RSA/NONE/PKCS1PADDING</algorithm>
            <!-- private key password -->
            <keyPassword>thekeypass</keyPassword>
            <!-- private key alias -->
            <keyAlias>local</keyAlias>
            <!-- key store type -->
            <storeType>JKS</storeType>
            <!-- key store password -->
            <storePassword>thestorepass</storePassword>
            <!-- path to the key store -->
            <storePath>keystore</storePath>
        </asymmetric-encryption>
    </network>
    <executor-service>
        <core-pool-size>16</core-pool-size>
        <max-pool-size>64</max-pool-size>
        <keep-alive-seconds>60</keep-alive-seconds>
    </executor-service>
    <queue name="default">
        <!-- Maximum size of the queue. When a JVM's local queue size reaches
             the maximum, all put/offer operations will get blocked until the
             queue size of the JVM goes down below the maximum. Any integer
             between 0 and Integer.MAX_VALUE. 0 means Integer.MAX_VALUE.
             Default is 0. -->
        <max-size-per-jvm>0</max-size-per-jvm>
        <!-- Name of the map configuration that will be used for the backing
             distributed map for this queue. -->
        <backing-map-ref>default</backing-map-ref>
    </queue>
    <map name="default">
        <!-- Number of backups. If 1 is set as the backup-count for example,
             then all entries of the map will be copied to another JVM for
             fail-safety. 0 means no backup. -->
        <backup-count>1</backup-count>
        <!-- Maximum number of seconds for each entry to stay in the map.
             Entries that are older than <time-to-live-seconds> and not updated
             for <time-to-live-seconds> will get automatically evicted from the
             map. Any integer between 0 and Integer.MAX_VALUE. 0 means infinite.
             Default is 0. -->
        <time-to-live-seconds>0</time-to-live-seconds>
        <!-- Maximum number of seconds for each entry to stay idle in the map.
             Entries that are idle (not touched) for more than <max-idle-seconds>
             will get automatically evicted from the map. An entry is touched if
             get, put or containsKey is called. Any integer between 0 and
             Integer.MAX_VALUE. 0 means infinite. Default is 0. -->
        <max-idle-seconds>0</max-idle-seconds>
        <!-- Valid values are: NONE (no eviction), LRU (Least Recently Used),
             LFU (Least Frequently Used). NONE is the default. -->
        <eviction-policy>NONE</eviction-policy>
        <!-- Maximum size of the map. When max size is reached, the map is
             evicted based on the policy defined. Any integer between 0 and
             Integer.MAX_VALUE. 0 means Integer.MAX_VALUE. Default is 0. -->
        <max-size policy="cluster_wide_map_size">0</max-size>
        <!-- When max. size is reached, the specified percentage of the map
             will be evicted. Any integer between 0 and 100. If 25 is set for
             example, 25% of the entries will get evicted. -->
        <eviction-percentage>25</eviction-percentage>
        <!-- While recovering from split-brain (network partitioning), map
             entries in the small cluster will merge into the bigger cluster
             based on the policy set here. When an entry merges into the
             cluster, there might be an existing entry with the same key
             already, and the values of these entries might differ. Which value
             should be set for the key? The conflict is resolved by the policy
             set here. The default policy is hz.ADD_NEW_ENTRY.
             There are built-in merge policies such as
             hz.NO_MERGE      ; no entry will merge.
             hz.ADD_NEW_ENTRY ; entry will be added if the merging entry's key
                                doesn't exist in the cluster.
             hz.HIGHER_HITS   ; entry with the higher hits wins.
             hz.LATEST_UPDATE ; entry with the latest update wins. -->
        <merge-policy>hz.ADD_NEW_ENTRY</merge-policy>
    </map>
    <!-- Add your own semaphore configurations here:
    <semaphore name="default">
        <initial-permits>10</initial-permits>
        <semaphore-factory enabled="true">
            <class-name>com.acme.MySemaphoreFactory</class-name>
        </semaphore-factory>
    </semaphore>
    -->
    <!-- Add your own map merge policy implementations here:
    <merge-policies>
        <map-merge-policy name="MY_MERGE_POLICY">
            <class-name>com.acme.MyOwnMergePolicy</class-name>
        </map-merge-policy>
    </merge-policies>
    -->
</hazelcast>
2 Practice
2.1 Stand-alone
2.1.1 Run the run.sh client
2.1.1.1 add
2.1.1.1.1 maps
2.1.1.1.1.1 m.put key value
2.1.1.1.2 lists
2.1.1.1.2.1 l.add item
2.1.1.1.3 sets
2.1.1.1.3.1 s.add item
2.1.1.1.4 queues
2.1.1.1.4.1 q.offer string
2.1.1.2 remove
2.1.1.2.1 maps
2.1.1.2.1.1 m.remove key
2.1.1.2.2 lists
2.1.1.2.2.1 l.remove item
2.1.1.2.3 sets
2.1.1.2.3.1 s.remove item
2.1.1.2.4 queues
2.1.1.2.4.1 q.poll
2.1.1.3 perform locking
2.1.1.3.1 lock key
2.1.1.3.2 trylock key
2.1.1.3.3 unlock key
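The lock/trylock/unlock console commands above mirror the java.util.concurrent.locks.Lock contract that Hazelcast's distributed lock implements. A local ReentrantLock shows the same semantics; this is a single-JVM stand-in, not the cluster-wide lock, and `demo` is an illustrative helper:

```java
import java.util.concurrent.locks.ReentrantLock;

// Single-JVM illustration of the lock/tryLock/unlock semantics that
// the console commands exercise against the distributed lock.
public class LockSketch {
    public static boolean demo() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                        // "lock key": blocks until acquired
        try {
            // tryLock never blocks; on a reentrant lock the same thread
            // already holds, it succeeds immediately.
            boolean again = lock.tryLock(); // "trylock key"
            if (again) lock.unlock();       // release the reentrant hold
            return again;
        } finally {
            lock.unlock();                  // "unlock key"
        }
    }

    public static void main(String[] args) {
        System.out.println("tryLock while held by same thread: " + demo());
    }
}
```

The difference in the distributed case is that the lock is owned cluster-wide: a `lock key` issued on one node blocks `lock key` on every other node until it is released.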
2.1.1.4 Run a Java client that performs some actions
2.1.1.4.1 import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;

import java.util.Collection;
import java.util.Map;

class HelloHazelcast {
    public static void main(String[] args) throws Exception {
        // If the connected member dies, the client will
        // switch to the next one in the list.
        HazelcastInstance client = HazelcastClient.newHazelcastClient(
                "dev", "dev-pass", "XXXX", "YYYY:5702", "ZZZZ");

        // All cluster operations that you can do with an ordinary HazelcastInstance
        Map<String, String> mapCustomers = client.getMap("customers");
        mapCustomers.put("1", "Joe Smith");
        mapCustomers.put("2", "Ali Selam");
        mapCustomers.put("3", "Avi Noyan");
        mapCustomers.put("4", "San Carter");
        mapCustomers.put("5", "Samantha Carter");

        Collection<String> colCustomers = mapCustomers.values();
        for (String customer : colCustomers) {
            // process customer
            System.out.println("Customer fullName is: " + customer);
        }

        // Exiting...
        client.shutdown();
    }
}
2.1.2 Start several cluster nodes on one machine
2.1.3 Start other cluster nodes using the Mac's Wi-Fi and a shared filesystem
2.1.4 Try out transactions and events/messaging
2.1.5 Disconnect nodes from the cluster
3 Articles and sources
3.1 Articles
3.1.1 http://java.dzone.com/articles/comparison-gridcloud-computing
3.1.2 http://www.briandupreez.net/2010/09/hazelcast-simple-distributed-caching.html
3.1.3 http://www.ibm.com/developerworks/java/library/j-jca/
3.2 Information
3.2.1 Official website, Hazelcast.com
3.2.2 Project on Google Code
3.2.3 Google Groups
3.3 Tutorials
3.4 Videos and presentations
3.4.1 http://www.slideshare.net/oztalip/hazelcast
3.4.2 Videos
3.4.2.1 http://www.hazelcast.com/screencast.jsp
3.4.2.2 http://www.hazelcast.com/screencast.jsp?s=100_node_cluster
3.4.2.3 http://www.youtube.com/watch?v=DozGQMHRoZI
