Selling software: get inspired

Selling software?

These are a few tips borrowed from great companies in other industries.

They will fit your business model as well, with some adaptations.

IKEA idea 1: the wall lamp KVART costs about 5 EUR, but requires a light bulb with at least 400 lumens and an E14 socket. This light bulb costs 4 EUR.

  • Lesson 1: sell the main product cheap. Make expensive accessories a must. The product is just a naked frame to which other expensive parts must be added.
  • RYANAIR idea 2: put a lot of add-ons in the selling process while the customer is on the way to the check-out.
  • Lesson 2: make the check-out a complex process made of many steps. You will have the opportunity to add new sales at each step.
  • IKEA idea 3: the department store has cheap, alluring items at the end of the store, in the area between storage and payment.
  • Lesson 3: add cheap, alluring items at the very last step of the sales process.
  • IKEA idea 4: buyers can return bought items at will. This costs them the money for a trip to the IKEA store (fuel, car maintenance), but they do not think about it. Clients are given the opportunity to load an IKEA cash bonus card when returning items. This card can be used to buy goods at the IKEA store only.
  • Lesson 4: make the clients do the quality control work. Use bad quality to increase sales. Never heard of scrap-ware?
  • Lesson 5: put the burden of quality control on your clients.

More references:

Marketing strategies at Hilton, Ryanair, Lego

Ikea, HSBC, Ryanair: Everything that matters this morning

Marketing Lessons from Ryanair — Customer Engagement


REST and JAVA exercise

REST questions

Retrieve a resource

GET /users/1234

Creating a resource with POST

POST /users

The client also sends a representation of the new object to be created.

Updating a resource with PUT

PUT /users/1234

List all resources

GET /users
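Putting the verbs above together, a minimal exchange might look like this sketch (the JSON fields and values are invented for illustration):

```http
POST /users HTTP/1.1
Content-Type: application/json

{ "name": "Anna", "email": "anna@example.com" }

HTTP/1.1 201 Created
Location: /users/1234

PUT /users/1234 HTTP/1.1
Content-Type: application/json

{ "name": "Anna", "email": "anna.new@example.com" }
```

Note that PUT sends the complete new representation of the resource, while POST on the collection lets the server choose the new resource's URI (returned in the Location header).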

ResourceSkapare class

  • Change members from public to private final
  • Change the constructor: it has too many ‘String’ formal parameters. I suggest using a mix of user-created value classes instead, like Address, PostCode, etc.
  • add toString()
  • override equals() and hashCode(). This may depend on the semantics that the designer has put into the class and its collaborating classes.
  • Array indexes should begin at 0.
  • I think that the class semantically does two different things, so some methods could be moved to a different class, or just removed.
  • Depending on the semantic objectives, this class could also be made a singleton.
  • method checkName() should probably be private. I do not remember the details of this method; there could be other issues with it.
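The ResourceSkapare source itself is not reproduced here, so this is only a hypothetical sketch (class and field names invented) of the direction the list above points at: private final fields, small value classes instead of many String parameters, and overridden equals()/hashCode()/toString():

```java
import java.util.Objects;

// Hypothetical value class replacing one of the many String parameters.
final class PostCode {
    private final String value;
    PostCode(String value) { this.value = Objects.requireNonNull(value); }
    @Override public boolean equals(Object o) {
        return o instanceof PostCode && value.equals(((PostCode) o).value);
    }
    @Override public int hashCode() { return value.hashCode(); }
    @Override public String toString() { return value; }
}

// Hypothetical constructor parameter: two typed values instead of raw Strings.
final class Address {
    private final String street;      // private final, assigned once
    private final PostCode postCode;
    Address(String street, PostCode postCode) {
        this.street = Objects.requireNonNull(street);
        this.postCode = Objects.requireNonNull(postCode);
    }
    @Override public boolean equals(Object o) {
        if (!(o instanceof Address)) return false;
        Address a = (Address) o;
        return street.equals(a.street) && postCode.equals(a.postCode);
    }
    @Override public int hashCode() { return Objects.hash(street, postCode); }
    @Override public String toString() { return street + ", " + postCode; }
}

public class AddressDemo {
    public static void main(String[] args) {
        Address a = new Address("Main St 1", new PostCode("11122"));
        Address b = new Address("Main St 1", new PostCode("11122"));
        System.out.println(a.equals(b)); // true: value semantics
        System.out.println(a);           // Main St 1, 11122
    }
}
```

With value classes like these, a constructor taking an Address is much harder to call with arguments in the wrong order than one taking four Strings.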

Static Lock (ReentrantLock) and SonarQube destroyed my day

Yes, the push to make a module pass SonarQube validation led us to put ‘static’ on a lot of fields. One of those statics happened to be attached to a ReentrantLock …
Many wasted hours of debugging followed, because the applications using this module went crazy and produced incorrect results.
A single misplaced static on a lock can destroy your application.
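The original module is not shown here, so this hypothetical Counter illustrates the trap: once the lock field is static, every instance in the JVM shares one lock, so a thread working with one instance blocks threads working with completely unrelated instances:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Illustration only: a SonarQube-driven 'static' turns a per-instance
// lock into a single global lock for the whole JVM.
class Counter {
    static final ReentrantLock LOCK = new ReentrantLock(); // was: instance field
    private long value;

    void increment() {
        LOCK.lock();
        try { value++; } finally { LOCK.unlock(); }
    }
    long value() { return value; }
}

public class StaticLockDemo {
    public static void main(String[] args) throws Exception {
        Counter.LOCK.lock(); // main thread holds the one shared lock
        try {
            final boolean[] acquired = {false};
            // Another thread tries to increment an UNRELATED counter:
            Thread t = new Thread(() -> {
                try {
                    acquired[0] = Counter.LOCK.tryLock(100, TimeUnit.MILLISECONDS);
                    if (acquired[0]) Counter.LOCK.unlock();
                } catch (InterruptedException ignored) { }
            });
            t.start();
            t.join();
            System.out.println("other thread acquired: " + acquired[0]); // false
        } finally {
            Counter.LOCK.unlock();
        }
    }
}
```

With an instance field the second thread would acquire its own counter's lock immediately; with the static field it times out, which is exactly the kind of silent contention that "destroys" an application under load.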

Benchmark: performance of StringJoiner, StringBuffer and the “+” plus operator

This is a benchmark for performance and behaviour of …..

The result is that

StringJoiner is always the worst solution

StringBuffer does not help very much, perhaps because the method is small.

A simple solution with the + operator is the winner!!!
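The benchmarked method itself is not shown, so as a rough illustration (names invented) the three variants being compared might look like the following. Note that for a small method javac already compiles the + operator into StringBuilder calls, which may explain why + wins:

```java
import java.util.StringJoiner;

// Hypothetical variants of a small join method: build "a,b,c" from an array.
public class ConcatVariants {

    // Variant 1: the plain '+' operator (compiled to StringBuilder by javac).
    static String withPlus(String[] parts) {
        String s = "";
        for (int i = 0; i < parts.length; i++) {
            s = s + parts[i];
            if (i < parts.length - 1) s = s + ",";
        }
        return s;
    }

    // Variant 2: StringBuffer (synchronized, hence often no gain here).
    static String withStringBuffer(String[] parts) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < parts.length; i++) {
            sb.append(parts[i]);
            if (i < parts.length - 1) sb.append(",");
        }
        return sb.toString();
    }

    // Variant 3: StringJoiner, which handles the separator itself.
    static String withStringJoiner(String[] parts) {
        StringJoiner sj = new StringJoiner(",");
        for (String p : parts) sj.add(p);
        return sj.toString();
    }

    public static void main(String[] args) {
        String[] in = {"a", "b", "c"};
        System.out.println(withPlus(in));          // a,b,c
        System.out.println(withStringBuffer(in));  // a,b,c
        System.out.println(withStringJoiner(in));  // a,b,c
    }
}
```

All three produce identical output; only the timing under a harness such as JMH differs.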



Java developer contractor freelance rate Stockholm and Sweden

For Stockholm the rate is

  • minimum 750 SEK (Swedish crowns) per hour.
  • everything under this rate is seen as suspicious by both contractors and buyers.
  • in reality, 850 SEK is the rate if the consultant has more than 3 years of experience in all required niche technologies
  • 950 SEK if the consultant has more than 5 years of experience in all required niche technologies.

The rate should be increased by one per cent for each week of payment delay.

You obtain 4% more if you are paid at four weeks net.

For Malmö, Gothenburg and the rest of Sweden:

  • everything under 700 is seen as suspicious and is a warning signal to both buyers and contractors.
  • 750 SEK is the rate if the consultant has more than 3 years of experience in all required niche technologies
  • 850 SEK if the consultant has more than 5 years of experience in all required niche technologies.
  • most government agencies pay 850 SEK per hour.

Information Service and Metadata for clouds: peer-to-peer compared to a hierarchical database

Evaluation Of Information Service Architectures For Grids

M. Cianciulli

Institute of Computer Science, KTH, Stockholm, Sweden



The purpose of this research is to evaluate a grid information system based on two different architectures, a hierarchy of MDS4 services and a flat peer-to-peer system based on Distributed Hash Tables (DHT), and to measure their precision. The approach is to pragmatically compare the systems; experiments are conducted regarding their ability to answer queries while resources churn.

Resources and users in a grid are described and searched by a set of two or more attributes. Locating resources in a grid is more complex than locating resources in a peer-to-peer system: the reason is that resources need to match multi-attribute range queries, i.e. queries that identify the resources characterized by a set of attributes whose values fall into given intervals. Peer-to-peer DHT systems mostly support exact queries on one search key only. The requirement is therefore set that multi-attribute range queries must be supported, and that the system must also be evaluated under churn. Conclusions are drawn about how the systems behave under these circumstances according to the metric of precision. This work is closely related to the following two topics of CGW’10: monitoring and information management; distributed computing infrastructures.

  1. Introduction

Grids and peer-to-peer systems are both distributed facilities for coordinated sharing of computing resources. They have very different requirements: grids are more secure and reliable. Grids are built for Virtual Organizations (VO) that aggregate resources and users. The Open Grid Services Architecture (OGSA) specification lists the services needed to build a grid: 1) resource management, 2) job scheduling, 3) information services for metadata.

The focus of this work is on information systems for grids: their ability to answer queries while resources churn is evaluated by experiments. The metric used is precision. Information is usually made available through a centralized or distributed system, where the distributed system can be designed in a hierarchical or flat fashion. Hierarchies require global knowledge to be able to work and are usually built as a tree. MDS4 (Globus Toolkit Monitoring and Discovery System) [1] is an available information system for grids that is organized as a hierarchy of data sources and allows for global knowledge within a grid built with the Globus Toolkit. Tree hierarchies are inefficient, which is why flat systems like the DHT (Distributed Hash Table) [2] based ones emerged. The research is focused and narrowed to metadata about resources and users. The approach taken is that of pragmatically comparing the systems. This approach may not be completely theoretically fair, but it is important to compare technologies while they operate live.

These constraints and limitations apply: a fixed schema of information throughout the experiments is used. Flexible schema models for discovery of data in grid systems are out of the scope of this work.

  2. Related Work

No similar evaluation has been performed on the infrastructures analyzed in this work. Some related work is available: evaluations of the performance of discovery within grid systems, peer-to-peer middleware for grid monitoring and discovery, and some available DHT systems capable of range queries on multiple attributes. The latter were not created for grid systems.

2.1 Evaluations of performance of discovery applied to grid systems

Work in [9] evaluates the performance of a hierarchical grid information system built as an MDS4 system, compared to their super-peer model, which is a non-DHT peer-to-peer system. The paper [10] is the only work that, like ours, conducts an evaluation of several platforms for grid information discovery. This paper is from 2003, and the platforms being compared are: a relational one based on MySQL 4.0, Xindice 1.1, which is a native XML implementation, and MDS2, which is mainly based on LDAP. No churn behavior is modeled. The metric used is Query Response Time. The authors of [16] study the performance of the Globus Toolkit Monitoring and Discovery Service (MDS2), the European Data Grid Relational Grid Monitoring Architecture (R-GMA), and Hawkeye.

None of the mentioned papers takes churn behavior into account.

2.2 Peer-to-peer middleware for grid monitoring and discovery

[5] proposes an improved resource discovery and monitoring system based on the Pastry DHT, specifically for grid computing environments. The proposed system is composed of multiple Pastry layers. Each layer is composed of resource-bearing nodes which contribute one particular resource attribute, such as CPU or disk, over a specific threshold, to the layer. It is capable of multi-attribute searches but has very primitive support for range queries: only values less than, or greater than, a designated value may be retrieved. The papers [4] and [8] describe the usage of unstructured peer-to-peer systems within grid systems.

2.3 DHT tools capable of range queries on multi-attributes

MAAN [7] is built on Chord and its implementation is not available for use. It should provide both multi-attribute and range queries. A separate DHT layer is kept for each attribute. One DHT lookup per attribute is performed and the results of the sub-queries are intersected to filter out the result. [3] reports a deeper evaluation of the locality-preserving hash function used in MAAN, also regarding load balance. [11] also uses a Space Filling Curve to achieve range queries over a DHT, but maps a d-dimensional space to a one-dimensional index. Such a construction gives the ability to search across multiple attributes. XenoSearch [15] is not available for use and uses an extension of the Pastry DHT: one Pastry layer for each attribute type. The work [13] is a recommended reading for a deep analysis of this subject.

  3. The Experiment

A simple discrete-event simulation approach that does not depend on external data is used, instead of a trace-driven discrete-event simulation program.

One fundamental assumption about the model: a two-state failure model is used, which alternates between the recovered state and the failed state. Another model used in research is a three-state model where these states occur in sequence: recovered, failed but not visible, failed.

These stochastic assumptions apply:

Failure rate λ is the average number of failures per unit of time per node. It follows the exponential distribution: λ = 1/MTTF.

Repair rate μ is the average number of repairs per unit of time. It follows the exponential distribution: μ = 1/MTTR.

The experiment uses two recovery models. In the first model a correlation between TTF and TTR exists: TTR depends on TTF, so ρ is fixed for the experiments and a TTR value is then derived from it for each TTF that is experimented with. The second model has a fixed TTR for all the experiments.
Failure and repair rates are the same on average for each node.

That is: λ_i = λ for every node i > 0; μ_i = μ for every node i > 0.

Query rate is the number of queries per unit of time. It has a uniform distribution and is constant throughout the experiment.
The intensity relation between λ and μ is ρ = λ/μ. The number of nodes and the number of clients performing queries are kept constant throughout each experiment. Nodes churn, while the clients that issue queries do not churn.
Each node has its own repair facility or team.
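Written out, the rates and the intensity relation defined above are, with the ρ = 0.1 and MTTF = 60 min case used later in the experiments:

```latex
\lambda = \frac{1}{\mathrm{MTTF}}, \qquad
\mu = \frac{1}{\mathrm{MTTR}}, \qquad
\rho = \frac{\lambda}{\mu} = \frac{\mathrm{MTTR}}{\mathrm{MTTF}}
\;\Rightarrow\;
\mathrm{MTTR} = \rho \cdot \mathrm{MTTF} = 0.1 \times 60 = 6\ \text{min}
```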

Schema of the data sets which are queried: the schema of the data describing CPU processor resources and the schema of the data describing membership of a VO are in the following table. The latter is similar to that described in the referenced paper [14].

CPU processor data set:

Attribute | Unit | Type | Distribution | Avg. | Range
CPU       | MHz  | int  | skewed       | 2000 | –
RAM       | MB   | int  | uniform      | –    | 100–300
Storage   | GB   | int  | uniform      | –    | 400–1000

Membership data set:

Attribute  | Unit | Type | Distribution | Avg. | Range
Group      | ID   | int  | uniform      | NA   | 1–20
Role       | ID   | int  | uniform      | NA   | 1–20
Capability | ID   | int  | uniform      | NA   | 1–20

Table 1 Schema of the data set for CPU processor data and of the data set for membership data

Size of the experiment: the aim is to imitate as closely as possible the size of the EGEE II grid. The latest figures from [6] tell of about 17,000 users spread among 162 registered VOs, with about 20,000 computation units. The number of VOs is rounded up to 250, and a size of 25,000 is chosen for both the available computational units and the number of users.

3.1 Results of the experiment

This is a summary of the parameters of the experiment and the configuration common to all the following experiments:

Parameters:

  • Object: perform multi-attribute and range queries to retrieve records, with churn.
  • Dimensions: 3.
  • Items: saved data records (about membership or CPUs).
  • QoS and metric of discovery: precision, i.e. how many false negatives occur.

Configuration:

  • Number of nodes: 250.
  • Searchable keys stored in the system: 25,000 (for the CPU and for the user data sets).
  • Range/selectivity of queries performed: covers the whole ranges.
  • Clients: one client performs a query every 5 seconds against the network of nodes.

Each point in the pictures represents a run of one experiment for the given [ρ, time length, type of network, TTF].

MTTR is calculated from ρ using the intensity formula given at the beginning of this chapter.

All these experiments were run with a ρ of 0.1, while λ (= 1/MTTF) varies over the values 0.016, 0.02, 0.025, 0.033, 0.05, 0.1 and 0.5 per minute. The values of λ placed on the x-axis of the diagrams are calculated in Table 2.


MTTF (node is alive), in minutes:    60        50    40     30    20    10   5
MTTR (node is stopped), in minutes:  0.1×60=6  5     4      3     2     1    0.5
λ = 1/MTTF, per minute:              0.016     0.02  0.025  0.03  0.05  0.1  0.5

Table 2 Calculation of λ (= 1/MTTF). Values of MTTF and MTTR of the experiments when ρ is 0.1

      3.1.1 Results of experiment with TTF and TTR skewed, ρ = 0.1, 250 nodes, 120 minutes on CPU data

The diagram here below summarizes these results.


Figure 1 – Results: CPU data, ρ 0.1, TTR TTF skewed

MDS4 is almost always better than the Chord implementation, except when the TTF is at its shortest limit and when the TTF is 0.1. The false-negative rate of the Chord implementation seems to ‘explode’ upwards when the TTF is shortest. The Chord implementation also has a larger dispersion, measured as standard deviation, as pictured below. The largest part of the information that is known to be in the system is anyway not returned by queries, and this happens both for MDS4 and for the Chord-based implementation. A TTF shorter than five minutes is not used, as MDS4 does not seem to manage very short intervals. Such short intervals of churn are perhaps not relevant when the scope of the experiment is a science grid, while they would be relevant in a desktop grid or in an internet-wide general facility.


Figure 2 – Std.Dev. MDS4 and Chord-multi

3.1.2 Results of experiment with TTF skewed and TTR fixed, ρ = 0.1, 250 nodes, 120 minutes on CPU data

A different recovery model is used: the TTR is now fixed while the TTF still varies. The diagram here below summarizes these experiments.


Figure 3 – Results: CPU data, ρ 0.1, TTF skewed, TTR fixed

3.1.3 Results of experiment with TTF skewed and TTR fixed, ρ = 0.1, 250 nodes, 120 minutes on Membership data

The DHT query is issued on the attribute ‘Group’, which is uniformly distributed in this case. Please refer to the previous table, which describes the schema for user data. A recovery model with a fixed TTR, while the TTF still varies, is used again.

The diagram here below summarizes these experiments.

Figure 4 – Results: User data, ρ 0.1, TTF skewed, TTR fixed


A slight improvement is observed in the precision of the Chord implementation, which gets better scores than MDS4. MDS4 is again better from a TTF equal to or greater than 0.050.

  4. Conclusions and future work

The experiments show that MDS4 and the Chord-based system behave differently under this model of churn and under the assumptions that were made for the data sets. MDS4 presents better precision in general, but the difference is not that big. The precision that both systems deliver is not particularly satisfying: a large part of the information that is known to be available is never retrieved by these systems. On the positive side, this means that there is still a lot of optimization work that could be done.

What would happen if the number of searched attributes were increased from three to five? And what if the number of nodes were taken up to 10,000?

The Chord-based system presents a much higher dispersion in the precision it delivers. Can this be accepted, or is it a sign that the implementation must be improved? This system is also the one with the most room to be optimized and improved.

    4.1 Future work

I suggest a deeper analysis of the composition of queries, the modelling of data, the dynamic behaviour of nodes in the system, and other metrics to be used.

Query composition: processing of queries could be optimized for some or all of the subsystems that were evaluated. How fair this would be should also be decided. Experiments could be run using different query widths.

Modelling data: scarce sources of information and studies are available about the dynamics and statistical distributions that govern membership, i.e. the data about the users of a grid. This is a severe shortcoming for simulating a grid environment. It is also a weakness for the design of future grid systems, as this data could be used as a basis for improving features like fault tolerance. Data about the distribution of CPUs is not available either.

Dynamic behaviour: I also hope that more data and studies about resource unavailability will become available. This would make future experiments on resource discovery more consistent and more useful as input to the design and management of discovery systems. One more recovery model could be used where, say, one tenth or more of the nodes always remain connected. Also, as [12] suggests, it would be interesting to study how system design choices depend on failure characteristics. This was out of the scope of this work.

Other metrics: The same experiments could be run to measure the service time to queries.


  1. J. Schopf, I. Raicu, L. Pearlman, et al., Monitoring and discovery in a web service framework: functionality and performance of Globus Toolkit MDS4.
  2. I. Stoica, R. Morris, et al., Chord: a scalable peer-to-peer lookup service for Internet applications, Proceedings of ACM SIGCOMM 2001.
  3. A.R. Bharambe, M. Agrawal, S. Seshan, Mercury: Supporting Scalable Multi-Attribute Range Queries, Proc. ACM SIGCOMM 2004, pp. 353-366, 2004.
  4. S. Bharathi, A. Chervenak, Design of a Scalable Peer-to-Peer Information System Using the GT4 Index Service.
  5. Ian Chang-Yen, D. Smith Nian-Feng Tzeng, Structured Peer-to-Peer Resource Discovery for Computational Grids, University of Louisiana at Lafayette.
  6. .
  7. M. Cai, M. Frank, J. Chen, P. Szekely, Maan: a multi-attribute addressable network for grid information, 2004.
  8. M. Marzolla,M. Mordacchini,S. Orlando, Peer-to-peer systems for discovering resources in a dynamic grid, Parallel Computing 33 (2007) 339–358.
  9. C. Mastroianni, D. Talia and O. Verta, Evaluating Resource Discovery Protocols for Hierarchical and Super-Peer grid Information Systems, 15th EUROMICRO International Conference on Parallel, Distributed and Network-Based Processing (PDP’07).
  10. Beth Plale, Resource Information Management in grid Middleware: Evaluation of Multiple Platforms with a Benchmark/Workload, Indiana University.
  11. C. Schmidt, M. Parashar, Enabling flexible queries with guarantees in P2P systems, IEEE Internet Computing 2004.
  12. B. Schroeder, G. A. Gibson, A large-scale study of failures in high-performance computing systems, International Conference on Dependable Systems and Networks (DSN 2006), pages 249-258, 2006.
  13. P. Trunfio, D. Talia, et al., Peer-to-Peer Models for Resource Discovery on Grids, CoreGRID Technical Report TR-0028, 2006.
  14. R. Alfieri, R. Cecchini, et al., From gridmap-file to VOMS: managing authorization in a grid environment, Future Generation Computer Systems, Vol. 21, No. 4 (2005).
  15. D. Spence, T. Harris, Xenosearch: Distributed resource discovery in the xenoserver open platform, HPDC-12: Symposium on High Performance Distributed Computing, IEEE, 2003.
  16. X. Zhang, J. L. Freschl, J. M. Schopf, Scalability analysis of three monitoring and information systems: MDS2, R-GMA, and Hawkeye, 2005-2007.

REST and CQRS Command Query Responsibility Segregation

CQRS stands for Command Query Responsibility Segregation.
REST goes well with CQRS when we use the paradigm of “REST without PUT”.
The idea is that we no longer PUT the “new” state of an entity; instead we make our mutations first-class nouns (rather than verbs), and POST them.

REST without PUT has a side benefit of separating the command and query interfaces (CQRS) and forces consumers to allow for eventual consistency.
We POST command entities to one endpoint (the “C” of CQRS) and GET a model entity from another endpoint (the “Q”).
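As a sketch (paths and field names invented for illustration), the contrast looks like this:

```http
# Classic REST: PUT the whole new state of the entity
PUT /users/1234
{ "name": "Anna", "email": "new@example.com" }

# REST without PUT: POST the mutation itself as a noun (a command entity)
POST /users/1234/email-changes
{ "newEmail": "new@example.com" }

# Query side: read the (eventually consistent) model from its own endpoint
GET /users/1234
```

The command endpoint can return 202 Accepted immediately; the query endpoint reflects the change once the command has been processed.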

JMH tips

JMH is a good tool for benchmarking your code.

The following tips will help you save time in an otherwise tricky setup.

  • use state and scope at the class level, like:
    @State(Scope.Benchmark)
    public class FileParser {
  • “No benchmarks to run” message? Check the include/exclude regexps.
  • Methods annotated with @Benchmark must be public.

Automatically generate java backend from existing database

I am working a lot with databases, Hibernate and RESTful APIs in Java SE / Java EE with HTML clients.
I am pleased to suggest this workflow that relies on Netbeans automatic generation capabilities.

1. Netbeans generates JPA entities from an existing database.

As described at DZone and at the Oracle site.
You also have the option of generating JAXB annotations directly in the JPA entity classes.

2. Netbeans then generates a RESTful API from the entities.

As described at the NetBeans Help site.

It is really a breeze to quickly come up with a Java backend.
I am now working on the HTML/JavaScript/jQuery client and will let you know.

Generate JPA entities from database

with Netbeans

with Eclipse

Eclipse Neon Help: Generate entities from tables

Create JPA project

with JBoss

with MyEclipse

Organize your week

How To Pomodoro Your Way To Productivity – Trello Blog

A tomato online timer and a Trello board can give a great lift to your personal work productivity.
The name Pomodoro derives from the tomato-shaped kitchen timer that Francesco Cirillo, the founder of the movement, used.

Tomato with trello: How To Pomodoro Your Way To Productivity

Pomello – Chrome Web Store – Google

Pomello turns your Trello cards into Pomodoro® tasks. This little timer is packed with features to help you stay focused and on track to achieve your day-to-day …

IT professions, no programming required

The IT world has completely changed: new professions have emerged in the last 10 years, and other existing ones have reached a new level of maturity.

  • Graphic design: many designers have no programming knowledge whatsoever. If you’re interested in both design and coding, you’ll be happy to know that there’s a programming language for designers called Processing.
  • User Experience (UX).
  • User Interface (UI) Specialist
  • Gaming technical artist
  • Localization QA Coordinator: controls the linguistic and localisation quality of games or other applications. Ensures a perfect game experience in one or all supported languages. Coordinates linguistic testing.
  • Technology outreach, Mentor, Evangelist
  • Technical writing: Programs, websites, scripts, and nearly every other type of product need extensive documentation. It can be instructions for users, requirements for developers, press releases, technical reports, specifications, or a wide range of other types of documents.
  • Growth hacker
  • Content Marketing Manager. Also creating content (e.g., a video) and campaigns that stir the market into a frenzy.
  • Partnership manager
  • Recruitment fellow
  • Teacher
  • Networked Storage Technologies (NAS, SAN, etc.)
  • Enterprise Resource Planning software (especially SAP)
  • Network Convergence Technologies (Voice, Video, Data & high-speed QoS infrastructures)
  • Data Analyst – Monetization
  • Game Economics Manager
  • Customers support
  • Community manager
  • Agile tools are good to possess even if you do not code. A web developer can profitably use a source code versioning tool for many purposes.

Here is a toolset that may help you with the choice.


These are career paths for you if you have a background in selling, retail, shops, sales, retail logistics or buying:

  • Game Economics Manager
  • Business Development / Sales: Masters the acquisition of new media partners and websites for companies applications/games. Gains in-depth insight into the development and maintenance of global contacts relevant to sales. Negotiates conditions and framework agreements with partners.
  • Marketing and sales
    What sets the tech world apart from many other fields is that companies are often in tune with up-and-coming methods of marketing and advertising, and this can be appealing to many people who want to work in tech without programming. For example, search engine optimization, search engine marketing, pay-per-click advertising, content marketing, web production, and social media marketing are all important, relatively new fields within marketing and advertising that tech companies are likely to be hiring for. Some of them require more technical knowledge than others, but they all benefit from a good understanding of the technology that the company is selling.

These are career paths for you if you have a strong background in coding but no longer want to code:

  • Product/Program Manager.
  • Project Manager.
  • QA / Testing (good testers are worth their weight in gold).
  • Build Engineering (this stuff is hard).
  • System Administrator.
  • Technical Sales.
  • Technical Writer.
  • Business Analyst / Programming Analyst

JBoss Tools 6.4 with Eclipse

I found the documentation surrounding this to be incomplete and confusing.

You need to keep an eye on

  • which version of Eclipse you are using
  • which version of Java you have
  • which JBoss site the tools reside on

JBoss Tools for Eclipse Luna and for JBoss EAP 6.x work on Java 7.

Do not use the JBoss installation site URLs or the Eclipse marketplace. Use “Install new software” in Eclipse instead. Insert this URL: “”


Comparison of Automatic REST API Code Generation Tools

I spent some time trying to find a proper tool that can generate tidy Java code for REST resources.

My requirements are:

  • The Java code must be readable and not polluted with a lot of imports that refer to the tool maker.
  • The input should be short and simple.

I stopped at two tools, as they were the only ones that complied with my requirements. They are:

  • Restunited
  • Swaggerhub

Swaggerhub wins.

The reason is that a YAML definition file is used as input to Swaggerhub. Also, the output consists of tidy and simple Java classes for the server implementation, based on JAX-RS.

Here is a YAML specification file you can begin with at Swaggerhub:
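For example, a minimal Swagger 2.0 starter (the title, base path and resource path are placeholders to adapt):

```yaml
swagger: "2.0"
info:
  title: Users API        # placeholder title
  version: "1.0.0"
basePath: /api
paths:
  /users/{id}:
    get:
      produces:
        - application/json
      parameters:
        - name: id
          in: path
          required: true
          type: integer
      responses:
        "200":
          description: the requested user
```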

RFC: Is Apache Kafka a message queue?

The classical definition of a message queue says that
message queue systems guarantee that
– messages will be delivered
– and delivered only once; that is, no messages are delivered twice (integrity).

Kafka is an at-least-once message queue
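A sketch of the client settings usually associated with at-least-once behaviour (the values are illustrative):

```properties
# Producer side: retry until the write is acknowledged by all in-sync
# replicas. A retry can re-send a message the broker already stored,
# which is exactly where duplicates come from.
acks=all
retries=2147483647

# Consumer side: disable auto-commit and commit offsets manually, only
# AFTER processing. A crash between processing and commit causes
# re-delivery of the batch, never loss.
enable.auto.commit=false
```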

We ask for your comments about this!

A REST and websocket programmer’s daily log

We want to share the issues and annoyances that slowed our trip.


Good to remember that it exists.

JSON Parsing

It is possible to use Jackson without JAXB, as in these examples and this specification.


The MessageBodyProviderNotFoundException has to do with JSON deserialization.
It is the symptom that appears when the Jersey client deserializes a map or a collection and readEntity fails.

Test With ObjectMapper

public void testGetIt() throws Exception {
    String responseMsg = target.path("apipath").request().get(String.class);
    ObjectMapper mapper = new ObjectMapper();
    MyListAnswer books = mapper.readValue(responseMsg, MyListAnswer.class);
}


Jersey version 2 servlet

If you want to use Jersey version 2, you need to replace the servlet definition with the following:
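The original snippet is missing from this post; a typical Jersey 2 servlet definition in web.xml looks roughly like this (the servlet name and the package com.example.api are placeholders for your own):

```xml
<servlet>
    <servlet-name>jersey-servlet</servlet-name>
    <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
    <init-param>
        <param-name>jersey.config.server.provider.packages</param-name>
        <param-value>com.example.api</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>jersey-servlet</servlet-name>
    <url-pattern>/api/*</url-pattern>
</servlet-mapping>
```

The key point is the servlet class org.glassfish.jersey.servlet.ServletContainer, which replaces the Jersey 1 com.sun.jersey servlet.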


Cannot call sendError() after the response has been committed

Solution: return right after sendError(), so that chain.doFilter(request, response) is not called.

How to throttle servlet requests: solution

Use a  web filter.
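A possible sketch of such a filter (assuming the javax.servlet API; the concurrency limit is invented), which also shows why you must return right after sendError():

```java
import java.io.IOException;
import java.util.concurrent.Semaphore;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Hypothetical throttling filter: at most MAX_CONCURRENT requests pass at once.
public class ThrottleFilter implements Filter {
    private static final int MAX_CONCURRENT = 50; // assumed limit
    private final Semaphore permits = new Semaphore(MAX_CONCURRENT);

    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (!permits.tryAcquire()) {
            ((HttpServletResponse) res).sendError(
                    HttpServletResponse.SC_SERVICE_UNAVAILABLE, "Too busy");
            return; // crucial: without this, chain.doFilter() runs anyway and
                    // you get "Cannot call sendError() after the response has
                    // been committed"
        }
        try {
            chain.doFilter(req, res);
        } finally {
            permits.release();
        }
    }
}
```

Register the filter in web.xml (or with @WebFilter) on the URL patterns you want to protect.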

ClassNotFoundException: org.glassfish.jersey.servlet.ServletContainer

Solution: the .war file is incomplete. Rebuild it and check that it contains all the libraries.

Setting property ‘source’ to ‘org.eclipse.jst.jee.server:appname’ did not find a matching property

You can change the Eclipse Tomcat server configuration. Open the Servers view and double-click on your server to open the server configuration. Then activate “Publish module contents to separate XML files”. Finally, restart your server; the message should disappear.

At least one JAR was scanned for TLDs yet contained no TLDs


Charting, Rickshaw, JavaScript, Servlet, JSON

Set “Keep-alive” to … with JAX-RS Response.header().


Client javascript parsing of Json

Use JSON.parse in onMessage(), or everything will be chaos 🙂

function onMessage(event) {
  var json = JSON.parse(event.data);
}

Some help


Update repository , and mirrors

Install custom jars


Setting up kdiff3 as the Default Merge Tool for git on Windows

Edit .gitconfig; it is usually in your user directory. Other .gitconfig files may exist, so pay attention!

[merge]
    tool = kdiff3
[mergetool "kdiff3"]
    path = D:/Progra~1/KDiff3/kdiff3.exe
    cmd = "D:\\Progra~1\\KDiff3\\kdiff3.exe" $BASE $LOCAL $REMOTE -o $MERGED
    keepBackup = false
    trustExitCode = false


Message loss and Apache Kafka

There is only one way to avoid message loss: use the tools that banks use.
Banks lose money and feel the pain if a message dies, and they are serious about this. If you want Kafka with no message loss, you need to:

  • implement your own protection layer around Kafka
  • monitor everything and everywhere.

Monitoring it well enough to keep operations fast is a beast of a job; the SLAs are out of its reach.

Kafka documentation.
We find a lot of inconsistency in it. The same concept is presented more than once, in different ways, at different places. Some descriptions are simply impossible to understand. The LinkedIn people are good at creating a huge cloud of blogs, benchmarks and assertions that no one can check. They even assert that Kafka fulfils all three requirements of the CAP theorem, and they are completely confused where they explain this. It tries to be many things at the same time. No university department has done research trying to classify its nature, precisely because it is a bag containing many things. There are plenty of performance benchmarks instead, done in friendly environments where all the nodes, consumers, ZooKeeper, producers and sinks stay healthy and never fail. The benchmarks are done by LinkedIn or are based on their assertions.
