Initially, programming happened in the cumbersome language of binary, a series of 0s and 1s. Those programs ran only on one specific type of hardware, and code built from so many tiny elements was hard for a person to read, so in those early years it was easy to make mistakes while programming, and modifying an existing program carried quite a few risks.

Then, in the 50s, the first high-level programming languages (Fortran, Lisp, and Algol) appeared. On the one hand, these languages were detached from the hardware, so code became more similar to English. On the other hand, their abstractions allowed programs to be formulated more clearly, intelligibly and transparently. Programs written in high-level languages could be translated into the low-level language (machine code) of different kinds of hardware.

Later, in the early 70s, C was released, followed by C++ a decade later. They soon became the most popular programming languages.

C was developed by Dennis Ritchie, who rewrote the Unix operating system in this language. Object-oriented C++ was Bjarne Stroustrup’s creation. In object-oriented programming (OOP) we describe the relevant properties and internal logic of real-world things, processes and events, as needed by a particular application, in the “classes” of the given language. At runtime, copies of these classes, called “objects”, interact with each other, modelling the operation of a slice of reality in quite a lifelike way. C and C++ code could be compiled for different hardware and operating systems and so could run on them, but you needed the source code for that, and in many cases it had to be adjusted to the target runtime environment.
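To make the idea of classes and objects concrete, here is a minimal, illustrative sketch in Kotlin (a modern JVM language discussed later in this post); the class and property names are invented for the example:

// A class describes the relevant properties and behaviour of a real-world thing.
class BankAccount(val owner: String, var balance: Int) {
    fun deposit(amount: Int) {
        balance += amount
    }
}

fun main() {
    // Objects are runtime copies (instances) of the class that interact with each other.
    val account = BankAccount(owner = "Alice", balance = 100)
    account.deposit(50)
    println("${account.owner} now has ${account.balance}")
}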



Another decade passed, and Sun Microsystems created a new language, along with a layer between the language and the underlying hardware and operating system. This intermediate layer became the Java Virtual Machine (JVM), which enabled a program written and compiled in any environment to run in any other environment without modification or recompilation. The new object-oriented language that compiles to the JVM was named Java.

Sun then dusted off an old idea and incorporated it into the JVM: garbage collection.

Programs work with data for which memory areas are reserved. After use, these areas have to be freed up again, otherwise free memory runs out after a while and the program fails with an error. Developers often forgot to release memory that was no longer in use, so a “memory leak” was a rather frequent problem. When Sun brought automatic memory release – garbage collection – with the JVM, a whole profession started to breathe easier.
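As a rough illustration (in Kotlin, running on the JVM, with types invented for the example): the first function below creates an object that becomes unreachable when the function returns, so the garbage collector can reclaim its memory automatically; the second shows the kind of leak-like pattern even a garbage collector cannot help with, because the program itself keeps holding on to data it no longer needs:

// Hypothetical example type, used only for illustration.
class Report(val data: ByteArray = ByteArray(1_000_000))

fun generateAndPrint() {
    val report = Report()          // memory reserved here
    println(report.data.size)
}                                  // `report` becomes unreachable; the GC frees it later

object Cache {
    val everything = mutableListOf<Report>()
}

fun generateAndKeepForever() {
    Cache.everything.add(Report()) // still reachable, so the GC can never free it:
                                   // if this list is never cleared, memory keeps growing
}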


Java became the world’s most popular programming language in no time, as it seemed to be a much simpler version of C++. The Java ecosystem has since grown enormously and offers a solution for almost everything.

Yet, no matter how capable it seemed, Java was not able to rule all existing platforms. It became popular mostly on corporate servers at international companies, locked away from the outside world and rich in resources (CPUs, memory). It could not conquer the browser, a security-sensitive terrain. Meanwhile, global systems like Amazon, Google and Facebook were built which, despite their abundant computing capacity, could not have served the world with Java-based software. With the latest big waves of information technology, smarter and smarter mobile phones arrived, and so did IoT, the Internet of Things. These devices typically have limited resources, so this terrain favoured languages that compile to machine code. So time has passed Java by.

Around 2008, JetBrains, a Russian company building software development tools, was looking for a solution to this situation, to remedy its own Java-based development difficulties. First they examined other existing languages, but none of them proved completely satisfactory. The team from St. Petersburg wanted pragmatic, unambiguous, concise code that would save them from commonplace programming mistakes. They summed up all the features they desired in a language and started to develop their own, naming it after Kotlin Island, an island close to St. Petersburg. Kotlin now runs not only on the JVM alongside existing Java code, but also in browsers, and soon it will run on small embedded hardware as well.
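A small, illustrative taste of what “pragmatic and concise” means in practice: a Kotlin data class and its null-safe types, which rule out the accidental null errors so common in everyday Java code (the class and field names here are made up for the example):

// One line declares the class, its properties, equals/hashCode/toString and more.
data class Customer(val name: String, val email: String?)

fun sendNewsletter(customer: Customer) {
    // `email` is declared as nullable, so the compiler forces us to handle the
    // missing case – forgetting it is a compile error, not a runtime crash.
    val address = customer.email ?: return
    println("Sending newsletter to $address")
}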

Although the world was slow to notice it, this summer Google announced that from this year Kotlin, along with Java, would be an officially supported Android language. This means that the new language will spread quick as a flash on the largest mobile platform in the world. In the era of the Internet of Things we will also be able to run Kotlin code on small embedded hardware, and what’s more, JetBrains wants Kotlin to conquer browsers next.

Why is all of this good for customers? Currently a “full stack” (frontend, backend, and mobile) developer needs to know several languages, but usually only knows one really well. With Kotlin it will soon be possible to program faster and more safely in each layer (frontend, backend, and mobile), and when you have an existing Java project you won’t have to rewrite it all in Kotlin: it can be done part by part, or you can extend it by attaching new parts written in Kotlin. Maintenance is also simplified with Kotlin. A developer who inherits an existing codebase can interpret it easily and can modify or extend it more safely. Overall, I think Kotlin-based development will be faster and safer, and Kotlin-based systems will be easier to maintain and improve than ever before.
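Attaching new Kotlin parts to an existing Java project really is as direct as it sounds: Kotlin calls Java classes with no translation layer in between. A tiny sketch, using only standard JDK classes (the function names are invented for the example):

import java.time.LocalDate
import java.util.UUID

// New Kotlin code can use existing Java classes (here, plain JDK ones) directly,
// so a Java project can be extended piece by piece.
fun newOrderId(): String = UUID.randomUUID().toString()

fun isWeekendDelivery(date: LocalDate): Boolean =
    date.dayOfWeek.value >= 6   // Saturday = 6, Sunday = 7 in java.time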

It will be a great choice for our customers.


Do you like shopping? For some it may be a chore but the advent of online shopping does make life easier, especially for the savvy shopper and not only in everyday life, but also when Christmas is coming. One can barely think of a gift idea which would not be available within a few clicks.

Have you ever wondered, from an IT perspective, what level of expertise is required for this? It is our natural expectation that web shops run smoothly during promotions or in busy shopping periods, but just a few years ago a half-day online promotion could quite possibly crash an entire website.

Today, this issue has become relatively easy to solve.

Professionals have found the answer to the problem of how to make computer systems adapt their performance to the load dynamically, with as little human interaction as possible. They long ago gave up on building one giant, powerful computer to achieve optimal performance – which is called vertical scaling – because such a machine’s capacity eventually hits a limit and it simply is not worth it economically. Instead, they combine the performance of lots of everyday PCs to optimise resource usage. This latter solution, called horizontal scaling, is cheaper and more flexible.

This method can be utilised by containerisation technology, too.

This means that different applications can be handled simultaneously, in packages. They are delivered in containers, and the containers themselves are run by a platform created specifically for this task. The developer packs the program they made into a container; the cluster built of many small machines – an ’infinitely’ scalable pool of resources – only has to be prepared to run containers.

Why is containerisation revolutionary? It takes several minutes for traditional virtualisation to launch a virtual machine; with containerisation it is a matter of milliseconds. Let’s imagine the resource demands at, say, a Christmas promotion launch, when the webshop is flooded by more than 100,000 users who all start shopping. In the days of traditional tools, the system administrator got a red light and began to plug in new machines. This took time, and visitors got annoyed at error messages. With the help of containerisation, today’s customers experience none of this. The whole process is automated: a sensor monitors the workload, and as soon as it reaches a threshold, new resources are mobilised. Without any human intervention, the system adapts itself to the resource requirements.
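Conceptually, the automation is a simple control loop. The sketch below (plain Kotlin, purely illustrative, not the real Kubernetes API) shows the kind of decision such a system keeps making, with the threshold and instance limits assumed for the example:

// Purely conceptual sketch of threshold-based horizontal scaling.
data class ClusterState(val currentReplicas: Int, val averageLoadPercent: Int)

fun desiredReplicas(state: ClusterState, threshold: Int = 80, min: Int = 3, max: Int = 10): Int =
    when {
        state.averageLoadPercent > threshold && state.currentReplicas < max -> state.currentReplicas + 1
        state.averageLoadPercent < threshold / 2 && state.currentReplicas > min -> state.currentReplicas - 1
        else -> state.currentReplicas
    }

fun main() {
    // Christmas promotion starts: load jumps, so the loop keeps adding instances.
    println(desiredReplicas(ClusterState(currentReplicas = 3, averageLoadPercent = 95))) // 4
}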

A few years ago, Google turned one of its tools supporting containerisation and horizontal scaling into open-source software. Thousands of programmers have since contributed to its development with their pro bono work.

It is called Kubernetes.

The name comes from the Greek word for “navigator”, from which the word ’cybernetics’ also derives. In ancient times, sailors needed to be bold and resourceful; travelling across the seas without maps or navigation tools, they constantly had to think for themselves. The same likely goes for the professionals at Google who, a few years ago, created this tool for accelerating operations and simplifying container management. By using it, software developers need less time to manage infrastructure and have more time to develop applications.

And this is also necessary.

Today, more and more web shops announce promotions that run for only a few hours, during which their products can be bought at incredibly low prices. On these occasions the traffic may increase vastly for a short time. Whether it was the availability of the technology that prompted these campaigns, or the demand from customers that made the developers work harder, is unknown, but it is a fact that providing the technology for such shopping sprees is not a problem anymore.

Meanwhile, in the background, the developer can make Kubernetes run a given application in, say, three to ten instances, depending on the traffic. The system will then automatically decide which machines out of a pool of a hundred will run those instances.

In the meantime, the developers are catching their breath. Which they will use as an opportunity.

Many of them will seize the opportunity to improve tools just like this one.


Have you ever seen a rugby game? Then you surely have seen how the players face each other, their heads interlocked, they cling together and push forward with all their might. They are trying to gain possession of the ball thrown in from the side. In English this is called the ‘scrum’.

One of the most popular agile methodologies was given the same name, presumably suggesting that the (developer) team members are fighting for success shoulder to shoulder. Everyone has to work together – and that is what delivers the benefits of scrum: the continuous conversations and iterations help customers review and modify the project every few weeks. They can see results not just when the project ends, but throughout its development.

However, beyond the obvious advantages of the scrum method, many companies also want to know whether the regular reviews can help them keep the budget under control.

How costs can be controlled.

A given specification – which the agile method calls the backlog (BL) – is made up of subtasks, i.e. stories. It is important that these stories are short and straightforward, so developers have a great deal of freedom in their execution. The time requirements of the stories are estimated by the members of the 5-8-person team, and on this basis story points are assigned to the subtasks. Cost estimation can also be based on these.

Let’s look at the example of a webshop. The BL notes that the website should include all the features the client wants. It should have an administrative interface where products can be uploaded, and a customer interface too. It should allow customers to pay and to decide how they would like to receive the product. Here, registration is the first story, login is the second, the purchase is the third, and so on. The team estimates – in a thoroughly regulated backlog refinement meeting – the value of each function in story points, then calculates an amount per point. Registration is relatively simple; it is typically worth two points, so its cost is twice the per-point price – let’s say that comes to two hundred thousand forints. Maybe the customer wants a cheaper solution – that is possible with this methodology, unlike with the Waterfall model. For example, the client can decide to allow purchasing in their webshop without registration, or that there is really no need for registration at all. In this case, the team re-estimates the story points, and the cost changes accordingly.
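The arithmetic behind the estimate is simple, and that is exactly what makes it transparent for the client. A small illustrative Kotlin sketch; the story names, point values and unit price are assumed for the example, following the description above:

// Illustrative cost estimate: story points multiplied by a unit price per point.
data class Story(val name: String, val points: Int)

fun estimate(backlog: List<Story>, pricePerPoint: Int): Int =
    backlog.sumOf { it.points * pricePerPoint }

fun main() {
    val pricePerPoint = 100_000 // forints, assumed for the example
    val backlog = listOf(Story("Registration", 2), Story("Login", 2), Story("Purchase", 8))
    println(estimate(backlog, pricePerPoint))                                      // 1 200 000
    // The client drops registration, the team re-estimates, and the cost follows:
    println(estimate(backlog.filter { it.name != "Registration" }, pricePerPoint)) // 1 000 000
}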

The customer has already been spared an unnecessary function, and possibly saved a fair amount of money.

In the scrum methodology everything, including roles, is precisely defined. The Product Owner (PO) writes the stories with the customer. The PO is the “master” of the BL and connects the team and the customer. However, it is not the PO but the team who decides how many tasks they can complete in the next few weeks. The whole approach is based on commitment and responsibility.

The scrum master knows the methodology thoroughly. They are responsible for the fulfilment of processes and commitments. They also carry out project management tasks: they are responsible for the team’s cooperation and deal with problems that arise. They run the daily 15-minute stand-up meetings precisely regulated by the scrum guide, where team members discuss what they did the day before, what tasks they expect to do that day, and what obstacles might have arisen. These three topics should be discussed every morning by the team members.

The guide also makes retrospective discussions mandatory: at the end of each iteration, every couple of weeks, developers look back on the work done, draw lessons, and lay out further directions of development.

The client’s only obligation is to sit in a room once every few weeks and see how the development is going. If necessary, they can intervene.

This used to be unthinkable before.

In my experience, customers are demanding more and more transparency, and it is important to them to have a say. The agile contract includes unit costs, and the development team makes sure that they always prepare what’s important for the client. At fortnightly meetings, they can keep track of where the work is going, or sometimes they can meet the developers – whilst earlier they could only contact them indirectly through the project manager.

These days, a whole industry has been built on teaching scrum. You can attend training sessions and workshops to learn how to put the method into practice. However, the essence of scrum can be summed up in three words: flexibility, commitment and real teamwork.

Just like in rugby games.


Let’s imagine a huge, complex project. The development team has been working for a year and a half and has stuck to the contract word for word. The software has been delivered, the deadline was kept, and they are certain they have acted exactly in accordance with the customer’s requests.

Except the anticipated ovation is missing. The registration interface is a failure: identification is QR-code based, but the client expected bar codes and never specified this in the contract. The signatories on the client’s side had never dreamt that any other solution could possibly exist. The project was completed by the target deadline, yet both sides are disappointed. A lot of effort has been put into the project, but the expected results have not been achieved.

Is this story familiar? Most of the experts working in the software development world have already had some sort of similar experience.

The recurring problem is the use of the Waterfall model.


In a paper by Winston W. Royce, the model is described as a project that proceeds through a linear series of phases, each completed in strict order, with precise deadlines and budgets, permanent team members, and minimal interaction between the client and the IT team during development. Briefly put, these are the characteristics of the Waterfall model.

And even though Royce saw clearly and pointed out the deficiencies of the method back in the seventies, the software industry simply ignored these warnings and limitations. Lengthy preparation and extensive documentation made us falsely feel that everything was well prepared. We seemed to be aware of everything and could see exactly what tasks had to be performed. But the truth was, we were just admiring a mirage.

Of course, there are some industries where it is not possible to work with alternative ways of thinking or project management methods. Public companies, public procurement and the banking sector are typically like that. No excuses can be made: what the developers deliver, by when, and at what cost must be precisely specified. Due to hierarchical organisational structures and the regulation of tenders, everyone involved has their hands tied – and this is not only true in this country.

However, in business life, in product development, and in more “horizontal”, decentralised organisations, there are only a few types of projects that justify the use of the Waterfall model when another option exists.

And indeed, it does exist.

In 1986, two Japanese organisational research specialists proposed a number of innovations to overcome the disadvantages of the model: they called for self-organising, flexible teams and stressed the importance of continuous learning within organisations.

Then, by the second half of the 90s, these proposals were already being used successfully in practice by several software development pioneers, such as Kent Beck, the creator of Extreme Programming. It was then that Jeff Sutherland and Ken Schwaber began to systematise the practices already in use – and from this, Scrum as we know it was born. They believed that you should respond flexibly to customer needs. In 2001 they collaborated with 15 other software development gurus to create the Agile Manifesto.


It is extremely rare for such a document to be born without historical or political reasons behind it, purely out of professional and business motives.

Like an oasis in the desert, the Agile Manifesto is an exception to this rule.

Its most important recommendations encourage close co-operation with the customer, effective communication and openness to change. The agile methodology was designed around the real issues of software development, so its key element is flexibility. Development is divided into iterations built upon one another, and each iteration is preceded, accompanied and followed by reconciliation with the client. With this method a thorough discussion takes place while laying out the basics, but there are opportunities to change, modify and fine-tune the plan on the move during each 1-3-week phase. It is not uncommon for the client itself to ask to change its original concept. Another important element is the retrospective at the end of each iteration – an overview of the process which helps to identify what should be improved or done differently next time to make the process more effective or digestible for the parties involved.

Several companies soon recognised the power of agility, and plenty of others have gradually shifted to the model in recent years. There are some, of course, to whom this approach still remains alien.

There are typically two reasons behind this aversion. Firstly, some find it difficult to handle the fact that you cannot tie a strictly fixed deadline and resource requirement to a project. Secondly, it can be discouraging to realise that continuous consultation requires resources from the customer too.

In other words, the client has to deal with the ongoing IT project from time to time, to approve it or, if necessary, change its course. This requires attention and energy. In addition, not only does the development team have to be agile; other parts of the organisation are involved too, which requires a flexible company, quick decision-making and effective communication. Again, these are not attributes that everyone has.

By contrast, there is another factor to consider.

And that is the result itself: an IT solution just as the client dreamt it. With a minimal margin of error, the customer gets exactly what they expected. No money is wasted. This is a case where the old saying “good work takes time” is one hundred percent true.

Overall, I think that organisations that have been “burnt” once, whose situation is ripe for change and who feel a sense of urgency, would probably never want to return to the Waterfall era once they have experienced the agile method. It is no coincidence that the agile methodology is far more common and widespread in the West than it is here.

Although the transition is never painless, my experience shows that dedicated work always comes to fruition.

And some well-deserved ovation.


Imagine this: the latest development turns out not to work, there is only one night left before go-live because the next day is the statutory deadline for the switch, and meanwhile the developer cannot be reached. With Docker technology, this would never happen.

In commonly used setups, when developers create software or an application – a webshop, say, or a new function for a banking back-end system – the program is passed on to operators, who then follow the written guidelines and install it on another server.

Operators and developers work independently: the software is tested by the developers on their own machines and then reproduced by the operators. For instance, a mail-sending system will be set to send mails through the test mail server. Then later, when switching to the “real” mail server, the whole process needs to be repeated.

In many industries, the scope of these duties is separated by strict rules. For example, in the financial sector it is especially important that developers do not have access to live systems. The two teams often communicate only through descriptions because of the geographical distance and differences in work shifts.

This, however, can cause complications due to inaccurate or out-of-date descriptions, not to mention any last-minute changes that may have been inadvertently left out of the text.

But there is a solution for this problem: a new platform called Docker.

Essentially, the steps that build the environment are recorded not on paper or in shared documents, but in a file that computers can read. Thus, the operator simply has to run the application and needs less knowledge of its internal structure. The running copy is called a Docker container.

Files that create the environment are called Dockerfiles.

Updated versions of these files can also be kept in the version control system along with the source code, so restoring an older version of the complete system becomes easier. The files describe only commands; sensitive data (such as database passwords) is provided by the operators separately.

So the risk is minimal. From development to the live system, everything runs through Docker, which means that if the environment description contained a bug, the software would not work for the developer either – the problem would surface before it ever reached the operators.


How does this all work in practice?

Let’s stick to the mail example.

Let’s say we’ve written a webshop program that sends customers an email. To run this application you need a runtime environment, which we define for Docker.

In the first line of the file we declare which operating system image to start from, and in the following lines we specify which packages to install. We also specify the other settings the server requires, such as database definitions and external connections. Some of the settings can be adjusted through environment variables, depending on where the container is used.
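As an illustration only, a minimal Dockerfile for such a mail-sending webshop might look something like this; the image name, file names and variable names are invented for the example, and the test mail server address is supplied as an environment variable precisely so it can be replaced later:

# Base operating system image to start from
FROM ubuntu:18.04

# Packages the application needs
RUN apt-get update && apt-get install -y openjdk-8-jre

# Settings that can be adjusted per environment
ENV MAIL_SERVER=test-mail.example.local
ENV DB_URL=jdbc:postgresql://db:5432/webshop

# The application itself and the command that starts it
COPY webshop.jar /opt/webshop/webshop.jar
CMD ["java", "-jar", "/opt/webshop/webshop.jar"]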

What is the operator’s job? When they receive the above specification, they simply replace the address of the test email server with that of the live system, and the environment in which the application can run is created.


What does this mean for customers?

First and foremost, speed and accuracy. Endless hours are no longer spent seeking out errors or omissions in text-based documents, mistakes which are usually only discovered on the very last day before a system goes live.

This solution is much simpler from the operations side too. There is no need for as detailed a knowledge of servers and applications as before. Just as the shipping container was a big breakthrough for the transportation of goods – different sizes and types of packages stopped being a problem – Docker provides a similar benefit: you can manage different applications in a unified way.

Of course, this solution is not a cure-all. It is not worth setting it up for simple projects. Moreover, there are cases when using Dockerfiles is not even possible. Since the essence of this technology is to describe how to set up a server in a Linux environment, the Docker method cannot be used if the solution you have prepared cannot run on a Linux platform.

However, we definitely recommend it whenever it is complicated to create the environment.

It is also a good idea to use this technology for Java developments if the installation happens on an application server. Likewise, Docker is great for microservices too. In these cases, each service with a separate life cycle should run as a separate Docker container.

The speciality of this solution is that although the initial phase of development and the integration of new infrastructure elements might take a little longer due to the continuous maintenance of the environment, this is negligible compared to the whole process. Overall, the time spent on communication and debugging before installation is much shorter.

What’s more, the technology and the software that Docker provides are free.

In the light of all this, I believe that it is time to end the era of descriptions shared in Word documents while running big, complicated projects.




“There have been cases when, after personally meeting some of the people we had selected for them, our client reacted like this: just give me the names of the rest of them. There is no need for any more interviews, they are just right for us!” says Gábor Privitzky, the Technical Director of BlackBelt Technology, proudly. He explains that the strict professional filter introduced by the company may be resource-intensive, but it pays off in the long run. It builds trust and closer business relationships with their partners. “If our consulting customers see that we always delegate skilled and well-prepared professionals to joint projects, they will call us again.”


Like on Who Wants to be a Millionaire


Based on his many years of experience, the director of BlackBelt insists on taking the professional recruitment process seriously. At his former workplaces there were several cases where professional communication with some of his colleagues didn’t work well, which obviously affected efficiency. “There are companies where selection goes only by CVs. Are you good at C++? Are you familiar with Linux? Awesome! You’re hired! The problem is that this way the filtering is left to the client, which is not fair. We have to make the selection.”

Naturally, not only does technical knowledge count, but professional aptitude and attitude too: how much the candidate would like to work here, and what their professional interests are. Privitzky stresses that they also test how much attention the candidates paid at university. “Those who do not pass our interview stage usually say it was like a college exam. Why, yes! That’s how I do an interview. Candidates often get algorithmic tasks. Often I don’t even care about the solution itself, only about the way the applicant thinks. Do they know how complex the algorithm in question is? Do they recognise the exponential, polynomial, quadratic or linear nature of the problem? If they learnt something similar at university, or if they have thought about questions like that, their answer will be good. It has of course happened that someone showed us not only their thinking but also gave us a concrete solution. That person is working with us right now.”

On the other hand, it also matters how the candidate handles a task that is too difficult for them. It is not good if they speak confidently but vaguely, but it is not good either if they simply give up. Thinking aloud about how to approach the solution is much more advisable. “It’s a bit like playing Who Wants to Be a Millionaire? You can talk around the topic or ask for help”, notes Gábor Privitzky.


Knowing it

Balázs Stasz has been working at BlackBelt for two months now, after a successful job interview. Although his solution was not perfect, it was a good approach to the task. “I didn’t feel too overwhelmed in general. I think a lot of the questions asked would fit any developer,” said the graduate student of the Budapest University of Technology and Economics, who currently works as a Java developer.

Based on his experience with higher education, Gábor Privitzky considers those courses good where students are prepared to apply their acquired knowledge and to think, rather than being taught the details of whichever technologies are currently considered important.

“However, you also have to have language skills – it is extremely important if you want to work with us,” says Privitzky.


Dizzy pace

Of course, no interview is complete without a professional filter, and it has to weed out the weaker candidates as well. Senior Developer Ákos Mocsányi recently selected the best candidate for a DevOps Engineer (Developer and Operator) role. “I was curious about personality, operational experience, and how confident the candidate was. It was important that they didn’t get caught out when, at one point in the interview, I asked them to write a program while I watched. The task was not too difficult, but I was interested in how the candidate handled an unknown machine and tools. Still, there were a few who were thrown by it.”

Not like Levente László, who proved to be the best during the selection and started work at the company a few weeks ago. “At first it was not very comfortable – on a ten-point scale I would say the experience was a five – the situation was a bit unnerving and I was even a little shaken. At first people were nervous, as the environment was new, but then we were able to change a few things and solved the issues. The atmosphere here is nice. My experience is that when it comes to customer care and our interests, BlackBelt is all about communication.”

This is no coincidence, of course. Knowledge gained in the IT sector five years ago is already obsolete, so strong professional interest is important. “Anyone who doesn’t keep up will not be able to do their job within a few years,” says Ákos Mocsányi. That is why it is important to find out how open candidates are and whether they are interested in the latest technological trends and able to keep up with them, as changes happen at a dazzling pace in this profession – only the best keep pace.

They’re the black belts.

More than 20 years have passed since Netscape Communications came out with JavaScript (JS). Initially, in the age of static websites, it was not taken very seriously. It was an insignificant add-on tool that solved smaller tasks such as changing the background colour or making a window pop up. Later, with increasing consumer demands, user experience (UX) solutions came along, making web pages clearer and more functionally sophisticated. At that point, the role of JS became more appreciated. Applications of greater complexity were built with it, and today a significant share of web pages run on it – from small web shops through the largest social networks to many areas of science.

As time went by, code sizes increased of course – and let’s pause here for a moment.

Imagine a dictionary that contains words of a language. For easier orientation, verbs are in bold and nouns are underlined, that is, each word class has its own distinctive mark. Would it be easy to find an expression? Well, not really. Different fonts help a little, but since all the letters are black, by skim-reading a page the word you are looking for won’t catch your eye.

The vocabulary of the JS language – the code base – is organised by a similar kind of logic. There are some cues to help identify the different kinds of elements, but they do not give developers significant support. That is why we say that JS is a “weakly typed language”.

So, as JS’s area of use expanded along with the size of code bases, this weakly typed nature made the developers’ work more time-consuming. When tens of thousands of lines of a complex program text do not differentiate the types, the developer sees only extremely abstract, near-meaningless text, and their work becomes really difficult. The larger the code base, the more problematic error-free interpretation becomes. They have two choices: either they interpret it by themselves (which is not realistic for complex programs), or they write their own documentation for themselves (and their team mates). The latter is, again, a time-consuming task, and although it is necessary, in many cases it is not the most effective way to develop.

Is there no other solution? Yes, there is! Another option is TypeScript (TS).

This language is a superset of JS: it uses types and the related methodologies known from other programming languages. While programming, you know exactly which part of the code you are working on; you do not have to guess or dig through documentation. If we stick to the dictionary metaphor, this means using colour highlighters instead of bold and underlining: verbs are yellow and numerals are red. TS can be used in simpler cases too, but the true reason for its existence is problematic code bases. It is a real help when many people work on a project at the same time, solving tasks that require frequent consultation.

Since you have to keep defining types while coding – you need to use the “highlighter” – work is somewhat slower than in a weakly typed language. But the invested time pays off later: the final program will have fewer errors, it will be more sustainable and more durable, and using it will be more trouble-free. Just as it takes longer to learn a thousand words of a foreign language than to learn twenty, you will find it much easier to use the acquired knowledge afterwards!

Today TS is used less than JS, but at BlackBelt we find it very important to support new colleagues, and with our help they can quickly learn how to use this programming language. We are convinced that it is worth it, partly because Microsoft stands behind it, which is a professional guarantee, but also because of its continuous improvement and extensive tool support – tools that speed up programming and reduce bugs. The JS-related tools are far from being this effective.

And what does all this mean for the client?

As I have already mentioned, using TS provides a more stable foundation and safer operation. There is less chance of developers accidentally introducing bugs, and if it still happens, the bugs can be found and corrected faster and more easily.

The TS code written by developers is transformed into JS by a translation process. Say that I write my book in Hungarian, because that is the language in which I can really express myself, but my target audience is English-speaking, so I have it translated. As the compiler generates JavaScript code from the TypeScript source code, the program will work in any JavaScript-enabled browser, or even on the server side. No external program or plugin is required.

Considering all of this, our experience shows that TS is the obvious solution for larger or more complex projects.

And we are not the only ones who think so – interest in TS has been growing steadily.

Who is Satoshi Nakamoto? It is still a mystery to this day. It could be a single person or an entire company behind the name. What is certain is that it was under this name that, at the end of the last decade, a detailed concept of a decentralised system was published. It describes how to store and move money without the involvement of the classic monetary players (e.g. banks, clearing houses) and, thanks to digital signature technology, without the risk of counterfeiting or subsequent modification. To achieve this, a data structure called a blockchain was developed, along with a protocol for adding new transactions. Thus it became possible to handle money electronically with no third party.


A blockchain is a chain of blocks, or packages, each containing hundreds of financial transactions from all over the globe. Each block contains a reference to the previous block, and they are linked so that you cannot take one out and join its neighbours to each other. Editing one block would create a knock-on effect that would be immediately noticeable. The result is a secure ledger of transactions: imagine the blockchain as a huge accounting book and each block as a page.
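A very rough Kotlin sketch of that page-linking idea (illustrative only – real blockchains store much more per block): each block carries a fingerprint, a hash (explained a few paragraphs below), of the previous one, so tampering with any page breaks every link after it.

import java.security.MessageDigest

fun fingerprint(input: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(input.toByteArray())
        .joinToString("") { "%02x".format(it) }

// Each block records the fingerprint of the block before it.
data class Block(val previousHash: String, val transactions: List<String>) {
    val hash: String = fingerprint(previousHash + transactions.joinToString(";"))
}

fun main() {
    val first = Block(previousHash = "0", transactions = listOf("Alice pays Bob 1"))
    val second = Block(previousHash = first.hash, transactions = listOf("Bob pays Carol 2"))
    // Editing `first` would change its hash, so it would no longer match
    // `second.previousHash` – the tampering is immediately noticeable.
    println(second.previousHash == first.hash) // true
}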

Creating a block and linking it to the chain is possible only by solving a computationally hard puzzle. New transactions are added to the network’s pending list; when enough of them have been collected – about 350, or originally up to 1 megabyte’s worth, though this varies from blockchain to blockchain – they are linked to the chain as a block.

And that’s where block miners come into the picture.

Miners set up high-performance computers to solve these ever more complex mathematical problems. What does the algorithm look like? First, we need to understand hashing.

A hash function takes an arbitrary amount of input data and, through a mathematical process, produces a fixed-size output string: the hash. The “mining machine” first assembles the block’s data, and then the task is this: find the number (the nonce) which, when added to the block before hashing, produces a hash with certain characteristics, e.g. one that starts with four 0s. Such a number is, of course, quite difficult to find. And as the mining operations of the world notice that a new block of transactions is ready, they start to compete with each other to solve the current puzzle.
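A toy version of that puzzle, sketched in Kotlin (purely illustrative, not Bitcoin’s actual protocol): keep trying nonces until the hash of the block data plus the nonce starts with four zeros.

import java.security.MessageDigest

fun sha256Hex(input: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(input.toByteArray())
        .joinToString("") { "%02x".format(it) }

// Try nonce = 0, 1, 2, ... until the hash meets the required pattern.
fun mine(blockData: String, prefix: String = "0000"): Pair<Long, String> {
    var nonce = 0L
    while (true) {
        val hash = sha256Hex(blockData + nonce)
        if (hash.startsWith(prefix)) return nonce to hash
        nonce++
    }
}

fun main() {
    val (nonce, hash) = mine("previousHash|transaction list")
    println("Found nonce $nonce, hash $hash")
    // Checking a proposed solution is instant; finding it takes many attempts –
    // and real networks demand far more leading zeros than four.
}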

A world-wide competition begins. And what is the purpose? The reward!

That is, an amount of cryptocurrency awarded to the miner who was the fastest to solve the puzzle. Cryptocurrencies (Bitcoin, Ether, etc.) have a current exchange rate quoted in dollars, euros and so on.

Bitcoin is a blockchain-based digital currency. With its protocol and software, a shared ledger is operated by the computer network over the internet. Although Bitcoin’s market capitalisation is currently the largest, there are now hundreds of cryptocurrencies. These systems cannot be stopped or removed, due to their global decentralisation. There is no geopolitical situation or secret service that could halt their operation. The blockchain cannot be turned off.


And who is behind the mining machines?

Anyone who is able to build a dedicated machine worth about two or three thousand dollars.

Larger mines, however, are huge machine rooms, hangars, where tens of thousands of machines whir endlessly, solving complicated mathematical puzzles. The owners are individuals, companies and even states with access to cheap and reliable energy to meet the mines’ enormous, ongoing power requirements. It seems that at the moment China offers the most suitable conditions, but according to the news, Russia is preparing to mine cryptocurrency at a state level. Electricity companies there try to cooperate with miners, providing them with cheaper electricity. In recent weeks, North Korea has started a massive mining project. In Austria, two sisters founded a company called HydroMiner a couple of years ago; they mine with energy produced by hydroelectric power plants, which is cheaper and environmentally friendly.

But how good is this business anyway?

At the beginning of this decade, Bitcoin mining was easy, and many people started it simply out of curiosity. According to an urban legend, the Bitcoins mined by a former IT student from Budapest during his university years were worth more than 300 million forints by early this summer, and he plans to retire if their value reaches 1 billion forints.

Today, Bitcoin mining is difficult, and it is much easier to mine other currencies, such as Ether. However, you can easily purchase cryptocurrencies on the market, where the exchange rate is based only on supply and demand and is not influenced by any central authority. Of course, this has a downside: at present the exchange rate is extremely volatile. In a single day it can move thirty percent in one direction and the next day twenty in the other. The cryptocurrency market is a playground for speculators now; just under one percent of the Earth’s population is involved in this popular game.

However, this statistic will surely rise.

Forecasts indicate that Bitcoin’s exchange rate will grow explosively in the next ten years, and according to some estimates a single Bitcoin will be worth 50 thousand dollars. At present, however, we are still far from that. And although it is still typically an investment tool, it is being accepted as payment in more and more places.

In Japan, many places readily accept Bitcoin. The cash register displays the transaction details in a QR code (amount, Bitcoin address, transaction identifier); the wallet on the phone reads it and initiates the payment. The “address” plays the role of the bank account to which payments are credited: the sum paid from the phone is sent to the shop’s address. The transaction – as we discussed earlier – is written into a block.

But what does any of this have to do with tulip mania?

The flower bulbs mentioned in the title were brought to Europe by the Austrian emperor’s ambassador to the Turkish sultan. Then, in the late 1500s, a Dutch botanist started to cultivate them in Holland. People went crazy for them, and the tulip mania made tens of thousands rich. Some rare tulips sold for several times the price of an average house – until buyers started to stay away from the auctions. The reason was an outbreak of bubonic plague; with that the bubble burst, and that was the end of tulip trading on the exchanges. Many people cite this story in the context of Bitcoin.


Yet the developments so far suggest we are soon entering the era of cryptocurrency.

“We have had mentoring from the beginning. I think it’s a very effective tool. It’s a great feeling for a senior colleague to be listened to, and it is a big help for a new colleague to have someone to turn to. Moreover, I see that these stronger relationships last, even after the mentee becomes more independent,” says Gábor Privitzky, technical director of the company. Every new colleague is helped by a more experienced one during their starting period, for as long as they feel it is necessary.

Hermina Bán arrived half a year ago and works as a junior front-end developer. Although she has one and a half years of experience, she admits she would have felt lost without her mentor, Norbert Herczeg. “I have been receiving tons of help from him, and if I feel stuck I go to him without hesitation. It is really reassuring to have the support of an experienced colleague. For example, last week he stayed in the office with me until 8pm because I was stuck with a test after a development. Two pairs of eyes see more than one, so together we found the problem easily and remedied the situation.”

According to Privitzky, sensitivity is a must in mentoring; he also believes it is essential that the partners be compatible with each other. Hermina’s case was a success. As she tells us: “I have great professional respect for my mentor, which is important to me. It is good to know that during work I can ask him anything, and even if he doesn’t know the answer, he is happy to learn about it together with me.”