Do you like shopping? For some it may be a chore, but the advent of online shopping makes life easier, especially for the savvy shopper – not only in everyday life, but also when Christmas is coming. One can barely think of a gift idea that would not be available within a few clicks.
Have you ever wondered, from an IT perspective, what level of expertise is required for this? It is our natural expectation that web shops run smoothly during promotions or in busy shopping periods, but just a few years ago a half-day online promotion could quite possibly crash an entire website.
Today, this issue has become relatively easy to solve.
Professionals have found the answer to the problem of how to make computer systems adapt their performance to load dynamically, with the least possible human interaction. They long ago gave up the idea of building one giant, powerful computer to achieve optimal performance – an approach called vertical scaling – because after a while such a machine's capacity hits a limit and it is simply not worth it economically. Instead, they combine the performance of many everyday PCs to optimise resource usage. This latter solution, called horizontal scaling, is cheaper and more flexible.
This method can be utilised by containerisation technology, too.
This means that different applications can be handled simultaneously, in packages. They are delivered in containers, and the containers themselves are run by a platform created specifically for this task. The developer packages the program they made into a container; the cluster built of many small machines is a practically ‘infinitely’ scalable resource pool that only has to be prepared to run containers.
Why is containerisation revolutionary? It takes several minutes for traditional virtualisation to launch a virtual machine; with containerisation it is a matter of milliseconds. Let's imagine the resource demands at, say, a Christmas promotion launch, when the webshop is flooded by more than 100,000 users who all start shopping. In the era of traditional tools, the system administrator got a red light and began to plug in new machines. This required waiting. Visitors got annoyed at error messages. With the help of containerisation, today's customers experience none of this. The whole process is automated. A sensor monitors the workload, and as soon as it reaches a threshold, new resources are mobilised. Without any human intervention, the system adapts itself to resource requirements.
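The threshold-based scaling decision described above can be sketched in a few lines. This is only an illustration of the idea – the function name and the per-instance capacity figure are assumptions, not a real autoscaler API:

```typescript
// Illustrative threshold-based scaling decision, as described above.
// CAPACITY_PER_INSTANCE is an assumed figure, not a measured one.
const CAPACITY_PER_INSTANCE = 500; // requests/s one container is assumed to handle

function desiredInstances(
  requestsPerSecond: number,
  min: number,
  max: number,
): number {
  // How many containers would the current load need?
  const needed = Math.ceil(requestsPerSecond / CAPACITY_PER_INSTANCE);
  // Never scale below the minimum or above the pool's capacity.
  return Math.min(max, Math.max(min, needed));
}

// 100,000 shoppers at once: scale out without a human plugging in machines.
console.log(desiredInstances(100_000, 3, 300)); // 200
```

A real platform runs a loop like this continuously against live metrics; the point is that the decision itself is mechanical, so no operator has to watch a red light.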
A few years ago, Google released as open source a tool it had built to support containerisation and horizontal scaling. Thousands of programmers have since contributed to its development with their pro bono work.
It is called Kubernetes.
It comes from the Greek word for “helmsman” or “navigator”, from which the word ‘cybernetics’ also derives. In ancient times, sailors needed to be bold and resourceful. Travelling across the seas without maps or navigation tools, they constantly had to think independently. The same likely goes for the professionals at Google who, a few years ago, created this tool to accelerate operations and simplify container management. By using it, software developers need less time to manage infrastructure and have more time to develop applications.
And this is also necessary.
Today, more and more web shops announce promotions that run for only a few hours, during which their products can be bought at incredibly low prices. On these occasions the traffic may vastly increase for a short time. Whether it was the availability of the technology that prompted these campaigns, or the demand from customers that made the developers work harder, is unknown, but it is a fact that providing the technology for such shopping sprees is not a problem anymore.
Meanwhile, in the background, the developer can tell Kubernetes to run a given application in, say, three to ten instances, depending on the traffic. The system will then automatically decide which machines, out of a pool of a hundred, will run the application.
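The rule Kubernetes' Horizontal Pod Autoscaler applies for this is ceil(current × currentMetric / targetMetric), clamped between the configured minimum and maximum replica counts. A minimal TypeScript sketch of that decision (the function itself is illustrative, not part of any Kubernetes API):

```typescript
// Mirrors the Horizontal Pod Autoscaler's scaling formula:
//   desired = ceil(current * currentMetric / targetMetric),
// clamped to the configured minimum and maximum replica counts.
function desiredReplicas(
  current: number,       // replicas running now
  currentMetric: number, // e.g. average CPU utilisation observed (%)
  targetMetric: number,  // the utilisation we want per replica (%)
  min: number,
  max: number,
): number {
  const raw = Math.ceil(current * (currentMetric / targetMetric));
  return Math.min(max, Math.max(min, raw));
}

// Traffic doubles the average load: 3 replicas at 100% vs a 50% target.
console.log(desiredReplicas(3, 100, 50, 3, 10)); // 6
// Traffic dies down: the count never drops below the configured minimum of 3.
console.log(desiredReplicas(6, 10, 50, 3, 10)); // 3
```

The 3-to-10 range in the example matches the "at least three to ten instances" configuration mentioned above: the developer sets only the bounds and the target, and the platform does the arithmetic.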
In the meantime, the developers are catching their breath. Which they will use as an opportunity.
Many of them will grab the occasion to improve tools like that.
A good business idea pops up. Next come the partners, and the question: will the concept prove to be competitive? Potential investors need more than just a few flashy slides about vision and strategy. So, what would be the best way to present a prototype? András Wolf, Sales Director at BlackBelt Technology, thinks it is best to rely on outsourced product development.
As this expert puts it, “We come into the picture when the existing extravagant website is no longer enough and the use of a tried and tested technological solution becomes necessary.”
Their goal is to ensure that new enterprises are launched with the best-suited, custom-fit technology from the very beginning. That way it is not necessary to spend vast amounts of money when a functioning prototype of the product or service has to be presented to a venture capitalist or a business angel investor. Wolf goes on to say, “We get down to work when a Startup has only acquired a few clients, and we work together until they have clients well into the thousands. A fresh company could not scale up its capacity like that on its own, and scaling carries an inherent technological risk. So, our service is a kind of insurance for both the small enterprises and their investors.”
Many try to ‘manage’ the issue by developing something quickly for the presentation, but as both the Startup and the concept change and mature over time, these initial systems are usually scrapped. “We, on the other hand, create a stable, company-specific, unique technological environment, which is suited to accommodate modifications, modelling and even the creation of new prototypes. This makes our solution lasting and functional for a long time,” Wolf explains.
A crucial component is that these systems do not run on the Startups' own servers but in a cloud-based central system, typically accessed by registering through a website. This is a scalable solution, meaning that the service providers are free to set their own levels depending on the number of their clients and the extent of utilisation. Wolf observes, “Local hardware has many inherent problems, and it also becomes obsolete in a couple of years. Our specialists, on the other hand, move the application to the cloud, continuously optimising the scale. The difference adds up to serious savings each month.” At BlackBelt they use remote access; they work with domestic and international cloud providers of the highest professional standard and recommend them to their clients. They help select lasting and affordable technology.
In addition to the JUDO platform they have developed in-house, they also offer consultation (utilising DevOps) to their clients. They align their thinking with their clients, as well as monitoring and guiding the product development lifecycle.
Professional experience and significant savings are both guaranteed by BlackBelt. Neither require cumbersome negotiations or meetings either. Wolf reveals “We coordinate through video conferencing if needed, and we can also hand a complete solution and its operation to the client, if they want us to. They can log in to those servers that run their applications, but if they decide they would rather have us operating it, we will do that as well. In brief, they come to us with challenges and the ultimate outcome of our cooperation is a new and unique system.”
They have done projects in a number of different sectors, from telecom and finance through retail and logistics to medical technology. They carry out development for both the Hungarian and the Western markets.
Wolf earnestly says, “By working in many industries we have encountered a diverse range of business challenges. That's what we specialise in. We seek solutions for situations that have never been addressed before.”
BlackBelt also help in the implementation of those ideas that are left lying in the drawer. Who hasn't encountered the scenario where ideas for innovation pop up only to be tossed around for years due to the lack of time and resources until, as in many cases, they are ultimately forgotten? Since revenue is generated by the main activity, no resources can be reallocated from there. Now enter András Wolf and his team, who undertake to implement these plans. Due to their area of expertise, they deal only with the technological aspects, so the business idea is also safe. Wolf explains, “The business concepts to be implemented are in good hands with us. We are experts in our own industry; we are not in the business of evaluating the marketability of other companies' ideas, so we just don't do that. But we are committed to the technological implementation of long-espoused plans. We aim to be our clients' long-term, reliable partners.”
Have you ever seen a rugby game? Then you surely have seen how the players face each other, their heads interlocked, they cling together and push forward with all their might. They are trying to gain possession of the ball thrown in from the side. In English this is called the ‘scrum’.
One of the most popular methodologies of the agile approach was given the same name, presumably suggesting that the (developer) team members fight for success shoulder to shoulder. Everyone has to work together – and that is what provides the benefits of scrum: the continuous conversations and iterations help customers review and modify the project every few weeks. They can see the results not just when the project ends, but throughout its development.
However, despite the obvious advantages of the scrum method, many companies also want to see whether the regular reviews allow them to reduce their budget along the way.
So how can costs be controlled?
A given specification – called the backlog (BL) in the agile method – is made up of subtasks, i.e. stories. It is important that these stories are short and straightforward; that way, developers have a great deal of freedom in their execution. The time requirements of the stories are estimated by the members of the five-to-eight-person team, and on this basis story points are assigned to the subtasks. Cost estimation can also be based on these.
Let's look at the example of a webshop. The BL states that the website should include all the features the client wants. It should have an administrative interface where products can be uploaded, and a customer interface too. It should allow customers to pay and to decide how they would like to receive the product. Here, registration is the first story, login is the second, the purchase is the third, and so on. The team estimates – in a thoroughly regulated backlog refinement meeting – the worth of each function in story points, then they calculate an amount per point. Registration is relatively simple, typically worth two points; its cost is therefore twice the per-point price – let's say two hundred thousand forints in total. Maybe the customer wants a cheaper solution – that is possible with this methodology, unlike with the Waterfall model. For example, the client can decide to allow purchasing in their webshop without registration, or that there is really no need for registration at all. In this case, the team re-estimates the number of story points, thereby changing the cost.
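The arithmetic behind story-point costing is simple enough to sketch. The stories, point values and per-point price below are illustrative numbers in the spirit of the example, not figures from any real project:

```typescript
// Hedged sketch of story-point cost estimation; all values are illustrative.
interface Story {
  name: string;
  points: number; // estimated in the backlog refinement meeting
}

const PRICE_PER_POINT = 100_000; // forints; an assumed unit cost

function backlogCost(stories: Story[]): number {
  return stories.reduce((sum, s) => sum + s.points * PRICE_PER_POINT, 0);
}

const backlog: Story[] = [
  { name: "registration", points: 2 }, // simple, so two points: 200,000 Ft
  { name: "login", points: 2 },
  { name: "purchase", points: 8 },
];

console.log(backlogCost(backlog)); // 1200000

// The client drops registration; the team re-estimates and the cost falls.
const trimmed = backlog.filter((s) => s.name !== "registration");
console.log(backlogCost(trimmed)); // 1000000
```

Because every story carries its own price tag, removing or simplifying a function immediately shows up in the total – which is exactly the lever the client can pull.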
The customer has already saved an unnecessary function and possibly even a fair amount of money.
In the scrum methodology everything, including roles, is precisely defined. The Product Owner (PO) writes stories with the customer. They are the “master” of the BL and the link between the team and the customer. However, it is not the PO but the team who decides how many tasks they can complete in the next few weeks. The project approach is based on commitment and responsibility.
The scrum master knows the methodology thoroughly. They are the one responsible for the fulfilment of processes and commitments. They also carry out project management tasks: they are responsible for the team's cooperation, and they deal with problems that arise. They run the daily 15-minute stand-up meetings, precisely regulated by the scrum guide, where team members discuss what they did the day before, what tasks they expect to do that day, and what obstacles might have arisen. These three topics should be discussed every morning by team members.
The guide also makes retrospective discussions mandatory: at two-week meetings, developers look back on the work done in the past iteration, draw lessons, and lay out further directions of development.
The client's only obligation is to sit in a room once every few weeks and see how the development is going. If necessary, they can intervene.
This used to be unthinkable before.
In my experience, customers are demanding more and more transparency, and it is important to them to have a say. The agile contract includes unit costs, and the development team makes sure that they always prepare what’s important for the client. At fortnightly meetings, they can keep track of where the work is going, or sometimes they can meet the developers – whilst earlier they could only contact them indirectly through the project manager.
These days, a whole industry has been built on teaching scrum. You can attend training sessions and workshops to learn how to put the method into practice. However, the essence of scrum can be summed up in three words: flexibility, commitment and real teamwork.
Just like in rugby games.
A dissatisfied customer, an off-track development, a deadline missed because of some communication error – who wouldn't have nightmares about such things? A decade and a half ago, the signatories of the Agile Manifesto laid the foundations of a new software development approach. The method works great – in our experience, too.
Let’s imagine a huge, complex project. The development team has been working for a year and a half, they have stuck to the contract word for word. The software has been delivered, the deadline was kept, and they are certain they have acted exactly in accordance with the customer’s requests.
Except the anticipated ovation is missing. The registration interface is a failure: identification is QR-code-based, but the client, who did not specify this in the contract, wants a bar code instead. The signatories on the client's behalf never dreamt that any other solution could possibly exist. The project was completed by the target deadline, but both sides were disappointed. A lot of effort was put into the project, but the expected results were not achieved.
Is this story familiar? Most of the experts working in the software development world have already had some sort of similar experience.
The recurring problem is the use of the Waterfall model.
In a paper by Winston W. Royce, it was described as a project that proceeds through a linear series of phases, each completed in a strict order with precise deadlines and budgets, requiring teams of permanent members and minimal interaction between the client and the IT team during development. Briefly and subtly, these are the characteristics of the Waterfall model.
And even though Royce saw clearly and pointed out the deficiencies of the method in the seventies, the software industry just ignored these warnings and limitations. Lengthy preparation and extensive documentation made us falsely feel that everything was well-prepared. We were well aware of everything, we could see exactly what tasks had to be performed. But the truth was, we were just admiring a mirage.
Of course, there are some industries where it is not possible to work with alternative ways of thinking or alternative project management methods. Public companies, public procurement and the banking sector are typically like that. No excuses can be given: what the developers deliver, and when, must be precisely clarified – not to mention the cost. Due to hierarchical organisational structures and the regulation of tenders, all of those involved have their hands tied, and this is not only true nationally.
However, in business life, in product development and in more “horizontal” and decentralised organisations there are only a few types of projects that justify the use of the Waterfall model while another option might exist.
And indeed, it does exist.
In 1986, two Japanese organisational research specialists proposed a number of innovations to overcome the disadvantages of the model: they called for self-organising, flexible teams and stressed the importance of continuous learning within organisations.
Then, by the second half of the 90s, these proposals were already being successfully used in practice by several software development pioneers, such as by the creator of Extreme Programming, Kent Beck. It was then that Jeff Sutherland and Ken Schwaber began to systematise the already introduced practical elements – from which Scrum as we know it was born. They believed that you should respond flexibly to customer needs. They then later collaborated with 15 other software development gurus in 2001 to create the Agile Manifesto.
It is extremely rare that there are no historical-political reasons behind such an action, only professional and business motives.
Like an oasis in the desert, the Agile Manifesto is an exception to this rule.
Its most important recommendations are that concerns be acknowledged and that close co-operation with the customer, effective communication and openness to change be encouraged. The agile methodology was created to handle the issues of software development, so its key element is flexibility. Development is divided into iterations built upon one another, and each iteration is preceded, accompanied and followed by reconciliation with the client. With this method a thorough discussion takes place while laying out the basics, but there are opportunities to change, modify and fine-tune the plan on the move during each one-to-three-week phase. It is not uncommon for the client itself to ask to change the original concept. Another important element is the retrospective of development iterations – an overview of the process which helps to identify what should be improved or done differently next time to make the process more effective or more digestible for the parties involved.
Several companies soon recognised the power of agility; plenty of others have gradually shifted to the model in recent years. There are some, of course, to whom this approach still remains alien.
There are typically two reasons behind this aversion. Firstly, some find it difficult to accept that you cannot tie a strictly fixed deadline and resource requirement to a project. Secondly, it can be discouraging to realise that continuous consultation requires resources from the customer too.
In other words, the client has to deal with the ongoing IT project from time to time, to approve it or, if necessary, change its course. This requires attention and energy. In addition, not only the development has to be agile: the client's organisation is involved too, which requires a flexible company, quick decision making and effective communication. Again, these are not attributes that everyone has.
By contrast, there is another factor to consider.
And that is a precise result: an IT solution as the client dreamt it up. With a minimal margin of error, the customer gets exactly what they expect. There is no waste of money. This is the case when the old saying “good work takes time” is one hundred percent true.
Overall, I think that organisations that have already been “burnt once”, whose situation is ripe for change and who feel a sense of urgency, would probably never want to return to the Waterfall era once they have experienced the agile method. It is no coincidence that the agile methodology is far more common and widespread in the West than it is here nationally.
Although the transition is never painless, my experience shows that dedicated work always comes to fruition.
And some well-deserved ovation.
Creating a customer portal for the corporate telecommunications and ICT services arm of Invitel Group was merely the tip of the iceberg. Perhaps even more important in the course of their joint work with BlackBelt was the modernisation and interconnecting of their IT systems and cleansing of their data. Or as the Customer Service Manager puts it: they sailed into a new world.
It all started in 2015. It was then that Invitel’s leadership realised that in order to increase the company’s competitiveness, an even greater emphasis must be placed on customer focused operations. The need for renewal resulted in the launch of various programs, one of which was the development of a customer portal.
“As was the case with most players on the market, we did not have an online interface either – one that would have allowed our customers to handle their business 24/7 in a self-service way, or just to check what information we had about them. It was around this time that the technology matured to the point where it became possible to develop a more comfortable self-service system, giving the customer the feeling that instead of plodding on, they are really moving forward. This was a defining criterion for us,” recalls Mr Csaba Ilosvay, Corporate Customer Service Manager.
Ten companies submitted a bid to the customer portal tender and BlackBelt was the winner. “We gave each of them a test assignment. Using the database components handed to them BlackBelt not only reverse engineered our operations, but they even supplied a working prototype for the most basic functions. Their experts built the hierarchy from the traffic data structure and created a partial model for our business processes. We were surprised that they managed to pull it off with such little information provided, and they did it with such professionalism that the outcome was more in-depth than the kind of documentation that we had about our data connections ourselves. It became immediately obvious that they react speedily when needed, and that their development framework, Judo, can create a product extremely quickly and efficiently when provided the right parameters.” Ilosvay explains.
Thanks to Judo, the development environment created by BlackBelt, new issues did not require coding from scratch, merely the selection and customisation of the proper modules. As Mr Ilosvay points out it still required a massive amount of coordination in the beginning. Being the client, they often had very strongly held ideas about the course of the implementation, while BlackBelt’s specialists would often challenge these in a constructive way, based on their own experiences. It took both companies thinking together continuously to bring about the best possible results. “They led us along the path with a firm hand, not allowing us to get stuck in a rut. They treated us as partners, they assumed diligent ownership of the project, as if creating their own portal. Meanwhile, we also had to work on the project ourselves; portal development requires teamwork, and we also had to develop our own background system. We really put in the effort for this joint success. Whenever we got stuck we would all ponder the issue a little, then we had a creative joint brainstorming, that gave us a kind of a catharsis. And this way they saved us from a lot of unnecessary cost components, so overall this was a very efficient project.”
According to this IT professional, the customer portal was essentially a desirable part of a long process whereby things were tidied up; IT systems were modernised and then interconnected, data was cleansed. They consider their clients to be partners; when using the portal, they see the same data as the customer service staff. This means that customer service work is even more about the effort to find a joint solution. The interface – the visual appearance, the inner mechanisms and solutions – is under continuous development, and over 100 customers are already using the system.
Last but not least, a brand-new era has dawned for Customer Service. With the creation of integrated systems and safe channels of computerised communication, it became easier for call centre staff to work remotely, and they take advantage of this opportunity several times a week under well-defined rules – which also makes the company more attractive for new recruits. The project thus meets the needs of both customers and colleagues. That is huge added value for any employer.
“When we evaluated the bids, one of our criteria was the social and human benefit the project offered. This joint work started a kind of an upward spiral here at the company, which allows us to declare that Invitech has arrived in the present, to the forefront of the current global market. Now we are headed to the future.”
Developers exhausted by poring over programs of several tens of thousands of lines can breathe easier if they choose TypeScript for major projects. We at BlackBelt do so in many cases. The result: fewer errors, more sustainable and problem-free use – and more satisfied customers.
As time went by, code sizes increased of course – and let’s pause here for a moment.
Imagine a dictionary that contains words of a language. For easier orientation, verbs are in bold and nouns are underlined, that is, each word class has its own distinctive mark. Would it be easy to find an expression? Well, not really. Different fonts help a little, but since all the letters are black, by skim-reading a page the word you are looking for won’t catch your eye.
The vocabulary of the JS language – the code base – is organised by a similar logic. There are some conventions that help identify the different kinds of values, but they do not provide significant support for developers. That is why we say that JS is a “weakly typed” language.
So, when JS's area of use expanded along with the code base, this weakly typed nature made the developers' work more time-consuming. When several tens of thousands of lines of a complex program text do not differentiate the types, the developer sees only an extremely abstract, meaningless text. Their work becomes really difficult. The larger the code base, the more problematic error-free interpretation becomes. They have two choices: either they interpret it by themselves (which is not feasible for complex programs) or they write their own documentation for themselves (and their team mates). The latter is, again, a time-consuming task, and although it is necessary, in many cases it is not the most effective method of development.
Is there no other solution? Yes, there is! Another option is TypeScript (TS).
This language is a superset of JS: it uses types and the applicable methodologies known from other programming languages. While programming, you know exactly which point of the code you are working on; you do not have to guess or try to read documentation. If we stick to the dictionary metaphor, this means using colour highlighters instead of bold and underlining: verbs are yellow and numerals are red. TS can be used in simpler cases too, but its true reason for existing is problematic code bases. It is a real help when many people work on a project at the same time, solving tasks that require frequent consultation.
Since you have to keep defining types while coding – you need to use the ‘highlighter’ – work is somewhat slower than in a weakly typed language. But later, the invested time pays off: the final program will have fewer errors. It will be more sustainable, more durable, and its use will be more problem-free. Just as it takes longer to learn a thousand words of a foreign language than to learn twenty, you will find it much easier to use the acquired knowledge afterwards!
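A small, hypothetical example shows the ‘highlighter’ at work. The cart helper below is invented for illustration; the point is that a mistake which plain JS would only reveal at runtime is rejected by the TypeScript compiler before the program ever runs:

```typescript
// A typed version of a small webshop pricing helper (illustrative only).
interface CartItem {
  name: string;
  unitPrice: number; // in the smallest currency unit, to avoid float issues
  quantity: number;
}

function cartTotal(items: CartItem[]): number {
  return items.reduce((sum, item) => sum + item.unitPrice * item.quantity, 0);
}

const items: CartItem[] = [
  { name: "socks", unitPrice: 499, quantity: 2 },
  { name: "mug", unitPrice: 1250, quantity: 1 },
];

console.log(cartTotal(items)); // 2248

// cartTotal([{ name: "hat", unitPrice: "oops", quantity: 1 }]);
// ^ would not compile: Type 'string' is not assignable to type 'number'.
// In plain JS the same call silently produces "oops" arithmetic at runtime.
```

The annotations cost a few extra keystrokes, but every later reader of the code sees at a glance what a `CartItem` is – no separate documentation needed.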
Today TS is less used than JS, but we at BlackBelt find supporting new colleagues very important, and with our help they can quickly learn how to use this programming language. We are convinced that it is worth it, partly because Microsoft stands behind it, which is a professional guarantee, but also because of the continuous improvement and the expanding tool support – tools that speed up programming and reduce bugs. The JS-related tools are far from being this effective.
And what does all this mean for the client?
As I have already mentioned, using TS provides a more stable basis and safer operation. There is less chance of developers accidentally introducing bugs into the program, and if it still happens, the bugs can be corrected faster and more easily.
Considering all of this, our experience shows that TS is the obvious solution for larger or more complex projects.
And we are not the only ones who think so, the graph below shows an increased interest in TS:
Too difficult? Feels like an exam? Sorry, that's how I do an interview – says Gábor Privitzky, Technical Director of BlackBelt Technology, who has insisted on a serious professional recruitment process from the start. Only the best are chosen by the company, but they feel that at last they have a workplace where they can evolve day by day. And the team is great too!
“There were cases when, after personally meeting some of the people we'd selected, our client reacted like this: now just give me the names of the rest of them – there is no need for more interviewing, they are just right for us!” says Gábor Privitzky, Technical Director of BlackBelt Technology, proudly. He explains that the strict professional filter introduced by the company might be resource-intensive, but it is beneficial in the long run: it builds trust and closer business relationships with their partners. “If our consulting customers see that we always delegate skilled and well-prepared professionals to joint projects, they will call us again.”
Like on Who Wants to be a Millionaire
Based on his many years of experience, the director of BlackBelt has insisted on taking the professional recruitment process seriously. In his former workplaces there were several cases where professional communication didn't work well with some of his colleagues, which obviously affected efficiency. “There are companies where the selection goes only by CVs. Are you good at C++? Are you familiar with Linux? Awesome! You're hired! The problem is that this way the filtering is left to the client. Which is not fair. We have to make the selection.”
Naturally, not only technical knowledge counts, but professional aptitude and attitude too: how much the candidate would like to work here, and what their professional interests are. Privitzky stresses that they also test how much attention the candidate paid at university. “Those who do not pass our interview stage usually say that it was like a college exam. Why, yes! That's how I do an interview. Candidates often get algorithmic tasks. I often don't even care about the solution itself, only the way the applicant thinks. Do they know how complex the algorithm in question is? Do they recognise the exponential, polynomial, quadratic or linear nature of the problem? If they learnt something similar at university, or if they have thought about questions like that, their answer will be good. It has happened, of course, that someone showed us not only their thinking but a concrete solution. That person is working with us right now.”
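The kind of complexity contrast such questions probe can be shown with a classic pair of functions. This is an illustrative textbook example, not an actual BlackBelt interview task:

```typescript
// Two ways to compute the n-th Fibonacci number.
// (Illustrative example of exponential vs linear complexity.)

// Exponential time: each call branches into two further calls,
// so the work roughly doubles with every increase of n.
function fibNaive(n: number): number {
  return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
}

// Linear time: a single pass, carrying only the last two values.
function fibLinear(n: number): number {
  let [a, b] = [0, 1];
  for (let i = 0; i < n; i++) [a, b] = [b, a + b];
  return a;
}

console.log(fibNaive(10), fibLinear(10)); // 55 55
```

Both return the same answer for small inputs; the interviewer's question is whether the candidate sees that the first one is hopeless for n in the thousands while the second is trivial.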
On the other hand, it also matters how the candidate will handle a task that is too difficult for him. It is not good if they speak confidently but vaguely, but it is not good either if they just give up. Thinking loudly about how to approach the solution is much more advisable. “It’s a bit like playing Who Wants to Be a Millionaire? You can go around the topic or ask for help”, notes Gábor Privitzky.
Balázs Stasz has been working at BlackBelt for two months now, after a successful job interview. Although his solution was not perfect, it was a good approach to the task. “I didn’t feel too overwhelmed in general. I think many of the questions asked would suit any developer,” said the graduate of the Budapest University of Technology and Economics, who now works as a Java developer.
As for higher education, Gábor Privitzky considers those courses good that prepare students to apply their acquired knowledge and to think, rather than drilling into them the details of technologies currently considered important.
“However, you also need language skills; that is extremely important if you want to work with us,” says Privitzky.
Of course, no interview is complete without a professional filter to sift out unsuitable candidates. Senior developer Ákos Mocsányi recently selected the best candidate for a DevOps engineer role (a blend of development and operations). “I was curious about personality, operational experience and how confident the candidate was. It was important that they didn’t get caught out when, at one point in the interview, I said: write a program, and I’ll be watching. The task was not too difficult, but I was interested in how the candidate handles an unknown machine and unfamiliar tools. Still, there were a few who were thrown by it.”
Not like Levente László, who proved to be the best in the selection and started work at the company a few weeks ago. “At first it was not very comfortable – on a ten-point scale I would give the experience a five – the situation was a bit unnerving, and I was even a little shaken. The environment was new, so I was nervous at first, but then we were able to change a few things and solve the issues. The atmosphere here is nice. My experience is that when it comes to customer care and our interests, BlackBelt is all about communication.”
This is no coincidence, of course. Knowledge gained in the IT sector five years ago is already obsolete, so strong professional interest is essential. “Anyone who doesn’t keep up will not be able to do their job within a few years,” says Ákos Mocsányi. That is why it is important to find out how open candidates are and whether they follow the latest technological trends, because changes happen at a dazzling pace in this profession – only the best keep pace.
They’re the black belts.
When Bitcoin first appeared the world largely dismissed it as a fad, yet current predictions tell us it may transform the pillars of our society. At the core of the technology that gave rise to Bitcoin lies blockchain, which some claim to be just as significant as the Internet itself.
But how did it all start?
At the end of the last decade Satoshi Nakamoto* published details of a decentralized system in which it is possible to store and move money without the involvement of the classic monetary players (e.g. banks, clearing houses), while still eliminating the risk of counterfeiting** or subsequent modification. In order to achieve this, he developed a data structure called a blockchain, as well as a protocol for adding new transactions. Through these it became possible to handle money electronically without involving a third party.
A blockchain is in fact a chain of ‘blocks’, or packages, each containing hundreds of financial transactions that have occurred all over the globe. Newly initiated transactions are added to the network’s transaction list, and when about 350 transactions*** have been collected, they are linked to the chain as a block. Blocks are linked in a way that prevents the removal of a block and the re-connection of its remaining neighbours. The chain cannot be subsequently modified, and linking a new block is only possible by completing a costly series of operations.
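The linking described above can be sketched in a few lines of Python: each block stores the hash of its predecessor, so changing any earlier transaction invalidates every later link. The block layout and the transactions here are illustrative, not Bitcoin’s actual format.

```python
import hashlib
import json

def block_hash(body):
    # Hash the block's contents, which include the previous block's hash
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    body = {"transactions": transactions, "prev_hash": prev_hash}
    return {**body, "hash": block_hash(body)}

def chain_is_valid(chain):
    # Each block must carry its own correct hash and point at its predecessor's
    for i, block in enumerate(chain):
        body = {"transactions": block["transactions"], "prev_hash": block["prev_hash"]}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["Alice pays Bob 1 BTC"], prev_hash="0" * 64)
second = make_block(["Bob pays Carol 0.5 BTC"], genesis["hash"])
chain = [genesis, second]
print(chain_is_valid(chain))   # True

# Tamper with an earlier transaction: the chain immediately fails validation
genesis["transactions"][0] = "Alice pays Bob 100 BTC"
print(chain_is_valid(chain))   # False
```

This is why the chain cannot be quietly modified after the fact: a forger would have to recompute every block from the altered one onwards.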
And that’s where block miners come into the picture.
Miners set up high-performance computers to solve complex mathematical problems, which are required for a block to be created and linked to a chain.
What do these problems look like?
First, we need to understand the concept of hashing. A hash is a fixed-size string of characters that a hash algorithm produces from an arbitrary amount of input data. The ‘mining machine’ first calculates the block’s hash. The problem to be solved goes like this: find the number (a.k.a. the nonce) which, when added to the block’s contents, produces a hash that meets certain criteria, e.g. one that starts with four 0s. Now, that is difficult to find.
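A toy version of this search, assuming SHA-256 as the hash function and a four-zero prefix as the target (real Bitcoin mining encodes difficulty differently and uses double SHA-256):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Try nonces until the hash of block_data + nonce starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("a block of roughly 350 transactions")
print(digest)  # a hash starting with "0000"
```

Each additional required zero multiplies the expected number of attempts by sixteen, which is why dedicated hardware and cheap electricity matter so much to miners.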
Once the mining machines of the world detect that a new block has been linked to the chain, they begin to compete with each other to solve the next puzzle.
A world-wide competition begins, with the reward as its goal.
The reward is an amount of cryptocurrency awarded to the miner who delivers the solution first. Cryptocurrencies (Bitcoin, Ether etc.) have a current exchange rate in dollars, euros, etc. Bitcoin is therefore a blockchain-based digital currency. Through its protocol and software, a network of computers operates a shared ledger via the internet. Although Bitcoin’s market capitalisation is the largest, there are now hundreds of cryptocurrencies.
Due to their globally decentralized nature, these systems cannot be stopped or eliminated. There is no geopolitical situation or intelligence service that could halt their operation. The blockchain cannot be turned off.
Who is behind the mining machines?
Anyone who is able to assemble a dedicated machine worth around two to three thousand dollars.
Larger mines, however, are huge machine rooms, hangars, where tens of thousands of machines unceasingly work, creating a lot of noise. The owners are individuals, companies and even governments with access to the cheap and reliable energy needed to meet the continuous and enormous energy requirements of these mines. It seems that currently China has the most suitable conditions, but rumours are that Russia is getting ready to mine cryptocurrency on a government level, and power companies are trying to ally with miners to provide them with cheaper electricity. Moreover, North Korea also appears to have started a massive mining project in the past few weeks.
In Austria, two sisters founded a company called HydroMiner a couple of years ago. They’re mining using cheaper and environmentally friendly energy produced in hydroelectric power plants.
But how good is this business?
At the beginning of this decade Bitcoin mining was easy and many people mined it simply out of curiosity. According to an urban legend, Bitcoins mined by a former IT student from Budapest during his university years reached a value of more than 300 million forints by the start of this summer, and he plans to retire if the value goes up to 1 billion forints.
Today, Bitcoin mining is much more difficult, and it is significantly easier to mine other currencies, such as Ether. However, you can easily purchase cryptocurrencies on the market, where the exchange rate is based solely on supply and demand and is not influenced by any central player’s legal leverage. Of course, this has its drawbacks. For example, the exchange rate is extremely volatile at this time; it can move thirty percent in one direction in a single day, and then twenty in the other the next day. The cryptocurrency market is now a playground of speculators, and presently just under one percent of the earth’s population is involved in this popular game. This number is certainly set to rise.
Forecasts indicate that Bitcoin’s exchange rate will explode over the next ten years; according to some estimates it may go as high as 50,000 dollars. We are still far from that, but although Bitcoin is still typically viewed as an investment tool, it is also increasingly gaining acceptance as a means of payment. Many places in Japan already accept it. Cash registers display the transaction details (amount, Bitcoin address, transaction ID) in a QR code; the wallet on the phone scans it and initiates the payment. The transaction – as mentioned earlier – is then written into a block. In this context the equivalent of a bank account is called an ‘address’, where payments are moved to and from. The amount transferred from the phone is sent to the shop’s address.
But what does any of this have to do with tulip mania?
Well, the flower bulbs mentioned in the title were brought to Europe from Istanbul by the then Hapsburg Emperor’s ambassador to the Ottoman Empire. Later, in the late 1500s, a Dutch botanist started to cultivate them in Holland. People went crazy for them, and this ‘tulip mania’ made tens of thousands extremely rich.
However, this all came to a standstill when buyers suddenly started to avoid the tulip bulb auctions. Although the main reason for this was the bubonic plague, the bubble still burst in an instant, and that was the end of the tulip exchange trading.
Many cite this story in the context of Bitcoin. Yet the developments so far suggest we are entering the era of cryptocurrency.
* person or company – their identity is obscured
** to initiate the transaction requires the digital signature of the holder of the money
*** originally 1 megabyte, but it varies by blockchain
We have already covered Kubernetes in detail in a previous article, where we highlighted its benefits for horizontal scaling. However, there are other situations where this technology has been a breakthrough.
Let’s have a look at the software development process in a modern, agile approach. This process is based on regularly repeating the elements of the software development lifecycle: design, analysis, programming, building an executable, unit testing and acceptance testing.
Post-programming tasks can be heavily automated, which is necessary both to ensure that testing does not tie up the team’s human resources and to provide quicker feedback to developers on whether the latest changes were successful. These automated steps continuously integrate the developers’ source code changes into the application, which is why this phase is called ‘Continuous Integration’. It is easy to see that this methodology is much more flexible and lower-risk than traditional waterfall-type methods, where one often has to wait months for feedback. The result is projects that adapt much more easily to ongoing changes in the business environment.
Continuous Integration is taken further by Continuous Delivery and Continuous Deployment. With Continuous Delivery, newly built features are made available quickly and regularly: they are automatically deployed to a test server, which is not yet the client’s server but an intermediate system where potential users can have a look and where each function goes through a quality assurance procedure.
Continuous Deployment is the last phase, when all of this is automatically deployed to the live system.
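The stages above can be sketched as a pipeline definition. This is a hypothetical GitLab-CI-style configuration; the stage names, scripts and the `deploy.sh` helper are all illustrative, not from any real project:

```yaml
stages:
  - build
  - test
  - staging     # Continuous Delivery: automatic deploy to the intermediate server
  - production  # Continuous Deployment: automatic deploy to the live system

build-app:
  stage: build
  script: ./gradlew assemble

unit-tests:
  stage: test
  script: ./gradlew test

deploy-staging:
  stage: staging
  script: ./deploy.sh staging

deploy-production:
  stage: production
  script: ./deploy.sh production
```

A team practising only Continuous Delivery would trigger the last job manually; with Continuous Deployment it runs automatically on every successful build.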
The goal throughout the process is to test the program automatically and accurately. However, running a lot of tests takes time and places a great burden on the system, requiring substantial resources.
And here’s where Kubernetes can have a role again.
Since some processes require more resources and others fewer (depending on the number and nature of the tests, the needs of the tested components, and the degree of parallelisation of the automated steps), the resources involved in the execution should be scaled dynamically. One possible technology for this is containerisation, and Kubernetes, a well-functioning, open-source platform, is well suited to these kinds of projects.
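As a sketch of what this dynamic scaling looks like in practice, a Kubernetes autoscaler can be told to add or remove pods based on load. The Deployment name `test-runner` and the thresholds below are illustrative assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: test-runner
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-runner   # the workload that executes the automated tests
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add pods once average CPU use passes 80%
```

Kubernetes then adds test-runner pods while the suite is under load and removes them afterwards, without human intervention.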
While the task may seem simple from a distance, a closer look reveals it to be extremely complex. It is like flying an aeroplane: it’s easy to board a plane for your holiday, but the cockpit has hundreds of buttons. The same goes here, too: it’s easy to start working with Kubernetes, but you have to master it to use it efficiently.
The world however, seems to be going in this direction.
In fact, the developer community has practically fallen in love with this solution, because it makes a great many useful things achievable that were not possible before. Thanks to the community, the development of related tools has accelerated at a dizzying pace, with updates coming out monthly, which makes Kubernetes easier and easier to use – but it also means that we are continuously learning.
And customers are really happy.
Practical application of the CI/CD concept contributes greatly to more efficient use of resources. The customer thus gets a more profitable, less error-prone program at a more favourable price. As mentioned before, every change is tested immediately, no time is lost, and feedback arrives at lightning speed. Whereas it used to take the operations team time to move the program to the live system, that is no longer an issue, and feedback on the user experience is also available immediately.
Of course, agility is a prerequisite for all of this. The former Waterfall model comes from the traditional engineering approach of the sixties, from the time of skyscrapers, bridges and spectacular investments. In the software industry, however, it was a huge waste. So, the time has come for a new paradigm, because the customer won’t wait.
And from now on, they do not have to.
Imagine this: The latest development is a failure and there is only one night left until going live because the next day is the statutory deadline for the switch. Meanwhile, the developer cannot be reached. This would never happen with Docker technology.
In commonly used systems, when developers create software or an application like a webshop or a new function for a banking back-end system, the program will be passed on to operators who then follow the guidelines and execute the installation on another server.
Operators and developers work independently: the software is tested by the developers on their own machines and then reproduced by the operators. For instance, a mail-sending system will be set to send mails through the test mail server. Then later, when switching to the “real” mail server, the whole process needs to be repeated.
In many industries, the scope of these duties is separated by strict rules. For example, in the financial sector it is especially important that developers do not have access to live systems. The two teams often communicate only through descriptions because of the geographical distance and differences in work shifts.
This, however, can cause complications due to inaccurate or out-of-date descriptions, not to mention last-minute changes that may have been inadvertently left out of the text.
But there is a solution for this problem: a new platform called Docker.
Essentially, the deployment steps are recorded not on paper or in shared documents, but in a file that computers can read. The operator simply has to run the application and needs far less knowledge of its internal structure. The running copy is called a Docker container.
The files that describe how to build this environment are called Dockerfiles.
The current versions of these files can be kept in the version control system along with the source code, so restoring an older version of the complete system is easier. The files describe only commands; sensitive data (such as database passwords) is provided by the operators in a separate location.
So, there is no risk: from development to live system, everything runs through Docker. If the environment contained a bug, the software would not work for the developer either, so problems surface long before going live.
How does this all work in practice?
Let’s stick to the mail example.
Let’s say we’ve written a webshop program that sends customers an email. To run this application, you need a runtime environment, which we define as a Docker environment.
In the first line of the file we describe which operating system to start from, and on the following lines we specify which packages to install. We also specify the other settings required on the server, such as database definitions and external connections. Some of these settings can be adjusted with environment variables, depending on how the container is used.
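A minimal sketch of such a file for the webshop example; the base image, package and file names are illustrative assumptions, not a real project’s configuration:

```dockerfile
# Hypothetical Dockerfile for the mail-sending webshop example.

# First line: which operating system image to start from
FROM ubuntu:22.04

# Next: which packages to install
RUN apt-get update && apt-get install -y openjdk-17-jre

# Copy the application into the container
COPY webshop.jar /app/webshop.jar

# Environment-dependent settings, e.g. the mail server address; the operator
# overrides this for the live system, and sensitive data such as database
# passwords is supplied separately at run time
ENV MAIL_SERVER=test-mail.example.com

CMD ["java", "-jar", "/app/webshop.jar"]
```

The operator can point the container at the live mail server without touching the image at all, e.g. `docker run -e MAIL_SERVER=live-mail.example.com webshop`.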
What is the operator’s job? On receiving the above specification, they simply replace the address of the test email server with that of the live system’s server, and the environment in which the application can run is created.
What does this mean for customers?
First and foremost, speed and accuracy. Endless hours are no longer spent seeking out errors or omissions in text based documents, mistakes which are usually only discovered on the very last day before a system goes live.
This solution is much simpler from the operations side too: there is no need for such detailed knowledge of servers and applications as before. Just as the shipping container was a big breakthrough for the transportation of goods – different sizes and types of packages ceased to be a problem – Docker provides a similar benefit: you can manage different applications in a unified way.
Of course, this solution is not a cure-all. It is not worth setting up for simple projects, and there are cases where using Dockerfiles is not even possible: since the essence of the technology is to describe how to set up a server in a Linux environment, the Docker method cannot be used if your solution cannot run on a Linux platform.
However, we definitely recommend it whenever it is complicated to create the environment.
It is also a good idea to use this technology for Java developments if installation happens on an application server, and it is likewise great for microservices: in that case, each service with a separate life cycle should run as a separate Docker container.
A peculiarity of this solution is that although the initial phase of development and the integration of new infrastructure elements may take a little longer due to the continuous maintenance of the environment descriptions, this is negligible compared to the whole process. Overall, the time spent on communication and debugging before installation is much shorter.
What’s more, the technology and the software which Docker provides is free.
In the light of all this, I believe it is time to end the era of descriptions shared in Word documents when running big, complicated projects.