Found Cameron Purdy's blog.
Cameron Purdy was the founder and president of Tangosol, Inc., a market leader in delivering in-memory caching and data management solutions to companies building and running mission-critical enterprise J2EE applications. He is currently VP of Development at Oracle.
P.S. Tangosol developed Coherence and was acquired by Oracle for $200M.
P.P.S. Cameron Purdy on Scaling Out Data Grids.
Monday, March 7, 2011
Wednesday, March 2, 2011
Deep-deep-deep Google internals: Chubby
1. "The Chubby Lock Service for Loosely-Coupled Distributed Systems"
Mike Burrows, Google Inc.
Abstract
We describe our experiences with the Chubby lock service, which is intended to provide coarse-grained locking as well as reliable (though low-volume) storage for a loosely-coupled distributed system. Chubby provides an interface much like a distributed file system with advisory locks, but the design emphasis is on availability and reliability, as opposed to high performance. Many instances of the service have been used for over a year, with several of them each handling a few tens of thousands of clients concurrently. The paper describes the initial design and expected use, compares it with actual use, and explains how the design had to be modified to accommodate the differences.
OSDI'06: Seventh Symposium on Operating System Design and Implementation, Seattle, WA, November, 2006.
"We expected Chubby to help developers deal with coarse-grained synchronization within their systems, and in particular to deal with the problem of electing a leader from among a set of otherwise equivalent servers. For example, the Google File System [7] uses a Chubby lock to appoint a GFS master server, and Bigtable [3] uses Chubby in several ways: to elect a master, to allow the master to discover the servers it controls, and to permit clients to find the master. In addition, both GFS and Bigtable use Chubby as a well-known and available location to store a small amount of meta-data; in effect they use Chubby as the root of their distributed data structures. Some services use locks to partition work (at a coarse grain) between several servers."
-------------
"Paxos Made Live - An Engineering Perspective"
Tushar Chandra
Robert Griesemer
Joshua Redstone
June 20, 2007
Abstract
We describe our experience in building a fault-tolerant data-base using the Paxos consensus algorithm. Despite the existing literature in the field, building such a database proved to be non-trivial. We describe selected algorithmic and engineering problems encountered, and the solutions we found for them. Our measurements indicate that we have built a competitive system.
Labels:
Chubby
Tuesday, March 1, 2011
Looking for a job
Good day.
I am thinking about changing jobs.
I would like to write some low-level distributed infrastructure based on Netty/JGroups/etc.
Or some multithreaded server.
Or a distributed data store.
Or infrastructure for an MMO game.
Or to work with some low-level distributed infrastructure: Cassandra, HBase, HDFS, Hadoop, PIG, etc.
I am not considering business projects on {GWT, JSF}/{Spring, EJB}/Hibernate/RDBMS for now.
I will consider any city in the former USSR and any work arrangement.
I can mentor junior developers.
I am not picky about "perks".
P.S. Golovach.Ivan@gmail.com
P.P.S. I will not reply to letters of the form "send your resume, and then we will tell you about the project at some point".
Saturday, February 19, 2011
Why I became interested in IMDG.
The decade 2005-2015 is (among other things) the decade of The Concurrency Revolution.
But the current situation is that cores keep being piled on (dozens of them already) and nodes keep being combined into clusters (hundreds of them already), yet new approaches are still nowhere in sight.
Threads and Actors are not the answer: 1) they are too low-level, and 2) the number of thread/actor entities has to grow with the number of cores.
Fork/Join, as an attempt to get away from specifying the number of threads explicitly, is applicable when we parallelize the task ourselves. The framework picks the number of threads, but the rule for splitting the task is still defined by us (see the sketch below).
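A minimal Fork/Join sketch of this point (the class name and threshold are illustrative): the pool decides how many worker threads to use, while the splitting rule in compute() is still written by us.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums an array: the framework sizes the thread pool, we define how the task splits.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10000;    // illustrative cut-off for sequential work
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) { this.data = data; this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {              // small enough: compute sequentially
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) >>> 1;               // the splitting rule is ours
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                               // schedule the left half asynchronously
        return right.compute() + left.join();      // compute the right half, then join
    }
}
// Usage: long sum = new ForkJoinPool().invoke(new SumTask(array, 0, array.length));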
My idea is that the more scalable a system/task is, the weaker the memory model it has. At the processor level (the x86 memory model) or the language level (the new Java Memory Model) the memory model is already fixed, and all we can do is argue whether Java is well suited for 100K threads or whether Erlang is needed at that point.
But what if we build the Memory Model ourselves? Say we take reliable multicasting, build other kinds of multicast on top of it (atomic, causal, total order, etc.), build arbitrary Shared Memory models on top of those, and on top of Shared Memory implement some coordination mechanism, something like Linda. That is what is going on today.
Or how about this: business people write their logic, and from that logic the system 1) finds the weakest memory model on which it will still work, and 2) points out which parts of the business rules constrain the system's scalability the most. A Grid is then brought up for that memory model. In other words, the scalability of the system depends solely on the business logic, not on which cache you picked: GemFire ReplicatedCache or Coherence PartitionedCache.
P.S. The way I see it, we are now in the interregnum between the February and October Revolutions. The tsar has already been overthrown (cores are piled on, clusters are brought up), but there is no powerful idea defining the next decade of development (constitutional monarchy, liberal democracy, world revolution == Shared Memory, MOM, Linda, IMDG, Actors, ...).
P.P.S. Naturally, a statesman must also take into account the challenges of the future: world wars, industrial modernization, the problems of urbanization (by which I mean quantum computers with millions/billions of "threads" with very strange and asymmetric interactions between them).
Labels:
IMDG
Why write your own IMDG?
Info: a GemFire license for 1 node for one year is $25K. So a 40-node cluster is $1M/year.
$1,000,000 from EVERY business for EVERY year of using your IMDG!
And that is still on the cheap side: Oracle Coherence costs noticeably more.
P.S. To hell with those startups! Social networks for dogs, e-shops, the latest SMS-sending services, ... Everyone go write an IMDG!
JGroups: author blog
Found the blog of the author and lead developer of JGroups. He is also a developer of JBoss Infinispan (an IMDG).
Labels:
IMDG,
Infinispan,
JGroups
Friday, February 18, 2011
IMDG Internals: JGroups
I have been itching for a long time to dig into JGroups, a toolkit for reliable multicast communication.
And it turns out it is used by Ehcache, GemFire, and JBoss Infinispan:
Ehcache - Replicated Caching using JGroups.
GemFire - for Member Discovery (not sure about Communication and Data Transfer).
JBoss Infinispan
There is a lot of documentation there, including even a book (the link is broken): "Ken Birman: Building Secure and Reliable Network Applications. Excellent book on group communication. In-depth description of various protocols (atomic bcast, causal, total order, probabilistic broadcast). Very technical content."
P.S. I want to study the key libraries that cutting-edge JVM projects are built on. So far I have found two candidates: JGroups and Netty (probably the highest-performance NIO library). Netty, together with its author, is now involved in JBoss Infinispan.
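A minimal JGroups usage sketch, following the pattern from the JGroups tutorial. The cluster name and payload are arbitrary, and constructor/handler signatures vary slightly between JGroups 2.x and 3.x, so treat this as a sketch rather than version-exact code:

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class HelloCluster {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();             // default UDP-based protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                System.out.println("from " + msg.getSrc() + ": " + msg.getObject());
            }
        });
        channel.connect("demo-cluster");               // join (or create) the group
        channel.send(new Message(null, "hello"));      // null destination == multicast to the group
        Thread.sleep(5000);                            // give other members time to receive
        channel.close();
    }
}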
Wednesday, February 16, 2011
Collecting material on Memory Consistency Models of IMDGs
I am collecting material on Memory Consistency Models of IMDGs.
Please share whatever you have :)
In his corporate blog, Alexey Rogozin has started writing a series of articles, "Data Grid Patterns". I latched onto Data Grid Pattern - Network shared memory.
As is well known, any Shared Memory must have some Memory Consistency Model. But judging by how little material there is on this topic, even from the "giants" of the market (Oracle Coherence, IBM WebSphere eXtreme Scale), it became clear that this is a "sore spot" of IMDGs (and, in fact, of most NoSQL solutions).
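As a reminder of what a consistency model actually governs, here is the classic single-JVM visibility example (my own illustration, not taken from any vendor's documentation):

// Java Memory Model visibility: without 'volatile' on running, the worker thread is
// allowed to never observe the write made by stop(), because visibility is only
// guaranteed across proper synchronization points defined by the memory model.
class Worker implements Runnable {
    private volatile boolean running = true;   // remove 'volatile' and the loop may never terminate

    public void stop() { running = false; }

    @Override
    public void run() {
        while (running) {
            // do work
        }
    }
}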
Labels:
IMDG,
Memory Consistency Model,
Memory Model
Monday, February 14, 2011
Sunday, January 30, 2011
Monday, January 24, 2011
Coordination language: Linda
- Linda (wiki)
- from author: "Coordination Languages and their Significance"
- 10 years of experience: "Coordination Languages: Back to the Future with Linda"
- from Java viewpoint: "Linda implementations in Java for concurrent systems"
P.S. JavaSpaces is an implementation of Linda in Java by Sun, incorporated into the Jini project.
P.P.S. JavaSpaces is a base for GigaSpaces XAP.
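To make the Linda model concrete, here is a toy in-process tuple space in Java illustrating the core primitives (out = write, in = blocking destructive take, rd = blocking non-destructive read). This is only a sketch of the coordination model, not the JavaSpaces or GigaSpaces API:

import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.function.Predicate;

class TupleSpace {
    private final List<Object[]> tuples = new LinkedList<Object[]>();

    public synchronized void out(Object... tuple) {    // Linda: out(t)
        tuples.add(tuple);
        notifyAll();                                   // wake up consumers blocked in in()/rd()
    }

    public synchronized Object[] in(Predicate<Object[]> template) throws InterruptedException {
        while (true) {                                 // Linda: in(template), removes the match
            Iterator<Object[]> it = tuples.iterator();
            while (it.hasNext()) {
                Object[] t = it.next();
                if (template.test(t)) { it.remove(); return t; }
            }
            wait();                                    // block until a matching tuple is written
        }
    }

    public synchronized Object[] rd(Predicate<Object[]> template) throws InterruptedException {
        while (true) {                                 // Linda: rd(template), non-destructive
            for (Object[] t : tuples) {
                if (template.test(t)) return t;
            }
            wait();
        }
    }
}
// Usage: a producer calls space.out("task", 42); a worker blocks in
// space.in(t -> "task".equals(t[0])) until such a tuple appears.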
Labels:
Coordination language,
GigaSpaces,
JavaSpaces,
Jini,
Linda,
XAP
Friday, January 21, 2011
Mostly Concurrent Compaction for Mark-Sweep GC
Mostly Concurrent Compaction for Mark-Sweep GC
ABSTRACT
A memory manager that does not move objects may suffer from memory fragmentation. Compaction is an efficient, and sometimes inevitable, mechanism for reducing fragmentation. A Mark-Sweep garbage collector must occasionally execute a compaction, usually while the application is suspended. Compaction during pause time can have detrimental effects for interactive applications that require guarantees for maximal pause time. This work presents a method for reducing the pause time created by compaction at a negligible throughput hit. The solution is most suitable when added to a Mark-Sweep garbage collector.
Compaction normally consists of two major activities: the moving of objects and the update of all the objects’ references to the new locations. We present a method for executing the reference updates concurrently, thus eliminating a substantial portion of the pause time hit. To reduce the time for moving objects in each compaction, we use the existing technique of incremental compaction, but select the optimal area to compact. Selecting the area is done after executing the mark and sweep phases, and is based on their results.
We implemented our compaction on top of the IBM J9 JVM V2.2, and present measurements of its effect on pause time, throughput, and mutator utilization. We show that our compaction is indeed an efficient fragmentation reduction tool, and that it improves the performance of a few of the benchmarks we used, with very little increase in the pause time (typically far below the cost of the mark phase).
P.S. Found here. The ManagedRuntime.org project is interesting in its own right. It was created by Azul; as far as I understand, they partially rewrote OpenJDK plus the Linux memory management subsystem.
Tuesday, December 28, 2010
Consistent hashing
Consistent hashing is a scheme that provides hash table functionality in a way that the addition or removal of one slot does not significantly change the mapping of keys to slots. In contrast, in most traditional hash tables, a change in the number of array slots causes nearly all keys to be remapped. By using consistent hashing, only K/n keys need to be remapped on average, where K is the number of keys, and n is the number of slots.
Wiki: Consistent hashing
Theory: Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web
Practice: Web caching with consistent hashing
Finally, you may not know this, but you use consistent hashing every time you put something in your cart at Amazon.com. Their massively scalable data store, Dynamo, uses this technique. Or if you use Last.fm, you’ve used a great combination: consistent hashing + memcached.
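A minimal consistent-hashing ring in Java (node and key names are arbitrary examples). Adding or removing a node only remaps the keys that fall on the affected arcs, roughly K/n keys on average:

import java.nio.charset.StandardCharsets;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.zip.CRC32;

class ConsistentHashRing {
    private final SortedMap<Long, String> ring = new TreeMap<Long, String>();
    private static final int VIRTUAL_NODES = 100;   // virtual nodes smooth the key distribution

    private long hash(String s) {
        CRC32 crc = new CRC32();                     // any reasonably uniform hash works here
        crc.update(s.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
    }

    public void addNode(String node) {
        for (int i = 0; i < VIRTUAL_NODES; i++) ring.put(hash(node + "#" + i), node);
    }

    public void removeNode(String node) {
        for (int i = 0; i < VIRTUAL_NODES; i++) ring.remove(hash(node + "#" + i));
    }

    public String nodeFor(String key) {
        if (ring.isEmpty()) throw new IllegalStateException("no nodes in the ring");
        SortedMap<Long, String> tail = ring.tailMap(hash(key));   // first node clockwise from the key
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }
}
// Usage: add "cache-a".."cache-c" and look up nodeFor("user:42"); removing "cache-b"
// changes the owner only for keys whose positions fell on cache-b's arcs.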
Labels:
Amazon,
Consistent hashing,
Last.fm,
random trees
Monday, December 27, 2010
DarkStar #1: Dynamic Adaptation of User Migration Policies in Distributed Virtual Environments
Dynamic Adaptation of User Migration Policies in Distributed Virtual Environments
Abstract
A distributed virtual environment (DVE) consists of multiple network nodes (servers), each of which can host many users that consume CPU resources on that node and communicate with users on other nodes. Users can be dynamically migrated between the nodes, and the ultimate goal for the migration policy is to minimize the average system response time perceived by the users. In order to achieve this, the user migration policy should minimize network communication while balancing the load among the nodes so CPU resources of the individual nodes are not overwhelmed. This paper considers a multiplayer online game as an example of a DVE and presents an adaptive distributed user migration policy, which uses Reinforcement Learning to tune itself and thus minimize the average system response time perceived by the users. Performance of the self tuning policy was compared on a simulator with the standard benchmark non-adaptive migration policy and with the optimal static user allocation policy in a variety of scenarios, and the self-tuning policy was shown to greatly outperform both benchmark policies, with performance difference increasing as the network became more overloaded. These results provide yet another demonstration of the power and generality of the methodology for designing adaptive distributed and scalable migration policies, which has already been applied successfully to several other domains [17, 18].
P.S. Wiki:Project Darkstar
Labels:
DarkStar,
DVE,
migration policies
DarkStar #0: Scalable Data Storage in Project Darkstar
Scalable Data Storage in Project Darkstar
Abstract
We present a new scheme for building scalable data storage for Project Darkstar, an infrastructure for building online games and virtual worlds. The approach promises to provide data storage with horizontal scaling that is tailored to the special requirements of online environments and that takes advantage of modern multi-core architectures and high throughput networking.
After a brief overview of Project Darkstar, we describe the overall architecture for a caching data store. Then we provide more detail on the individual components used in the solution. Finally, we suggest some of the additional facilities that will be required to bring the full experiment to completion.
P.S. Wiki:Project Darkstar
Labels:
BerkeleyDB,
DarkStar,
distributed storage,
storage
Friday, December 10, 2010
Some words about distributed frameworks
Some words about distributed frameworks in the post "Characterizing Enterprise Systems using the CAP theorem":
* RDBMS
* Amazon Dynamo
* Terracotta
* Oracle Coherence
* GigaSpaces
* Cassandra
* CouchDB
* Voldemort
* Google BigTable
Labels:
CAP Theorem
STM in Scala
Scala 2.9 may include an STM.
The reference implementation is based on CCSTM.
P.S. CCSTM: A Library-Based STM for Scala
Monday, November 29, 2010
[PJP]: Dynamic Class Loading in the Java Virtual Machine
Sheng Liang and Gilad Bracha, "Dynamic Class Loading in the Java Virtual Machine", ACM OOPSLA'98, pp.36-44, 1998.
Abstract
Class loaders are a powerful mechanism for dynamically loading software components on the Java platform. They are unusual in supporting all of the following features: laziness, type-safe linkage, user-defined extensibility, and multiple communicating namespaces.
We present the notion of class loaders and demonstrate some of their interesting uses. In addition, we discuss how to maintain type safety in the presence of user-defined dynamic class loading.
P.S. Found in the tutorial on Javassist.
P.P.S. An interesting example:
MyClassLoader myLoader = new MyClassLoader();   // a custom loader that defines Box itself
Class clazz = myLoader.loadClass("Box");        // Box as defined by myLoader
Object obj = clazz.newInstance();
Box b = (Box)obj; // this always throws ClassCastException.
This is because it is not obvious which loader loads the class Box at the cast site ('Box b = (Box)...'): it is the loader of the calling class, not myLoader, so the two classes named Box are distinct at runtime.
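A self-contained variant of the example that can actually be run (the class and directory names are made up): assume a trivial public class Box { } is compiled both onto the application classpath and into ./classes. Because the URLClassLoader below has a null parent, it defines its own Box instead of delegating, and the cast fails:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class TwoLoadersDemo {
    public static void main(String[] args) throws Exception {
        URL classesDir = new File("classes").toURI().toURL();   // hypothetical directory holding Box.class
        // parent == null: this loader does not delegate to the application class loader
        ClassLoader myLoader = new URLClassLoader(new URL[] { classesDir }, null);

        Class<?> clazz = myLoader.loadClass("Box");              // Box as defined by myLoader
        Object obj = clazz.newInstance();

        // 'Box' at the cast site is resolved by the loader of TwoLoadersDemo (the app loader),
        // while obj's class was defined by myLoader: same name, two different runtime classes.
        Box b = (Box) obj;                                       // throws ClassCastException
    }
}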
Labels:
ClassLoader,
Javassist