…Is it for the glory? 😊

About 20 years ago, the choice must have been mainly about the brains. These days, pay weighs significantly in the choice of becoming a software developer. A software developer in their 30s can earn annually as much as an established CEO in their mid-50s – or more.

However, even for that pay, we, the people ‘from the other side’ – business consultants, lawyers, entrepreneurs, financiers, auditors and the like – can sometimes hardly understand the choice. Why would someone sit for hours in front of a computer and… write some numbers and formulas?

… Eventually, we came to get it: it is a means of creation. It IS creation.

TO CREATE SOMETHING.

When you create something with your hands using a palpable material – clay, say, or a paintbrush, or the spray can that puts graffiti on walls – the creation is visible to others as you intend it, and they can watch it grow step by step in an understandable way.

However, when you create something with the keyboard, the general public cannot understand it until it works. The magic is seen only at the end, and generally in a given context.

So, there! This must be the attraction of becoming a software developer: the magic of creating something. The urge to create is intrinsic to all humans, and some of the smartest brains on the planet create by writing code.

Software developers use programming and design knowledge to build software that helps people or institutions achieve certain objectives. They also test and deploy that software based on the specifications they have received.

Today, software developers are some of the most vital people in many aspects of the economy. Software isn’t just code, video games and apps; it’s the driving force of every computerized device on the planet.

 

Specifically, What Does a Software Developer Do?

Software developers design, program, build, test, debug, deploy and maintain software using many different skills and tools. They also help build software systems that power networks and devices and ensure that those systems remain functional.

Their job may also involve meeting with clients to determine the needs for a software solution, which will help them design the final product.

While software developers work in a wide variety of industries, these days many are freelancers. Depending on the setting, a software developer may work alone or on a team with other developers and programmers. In general, larger companies tend to have teams of developers due to the complexity of the software they are designing. Outsourcing teams of software developers is quite frequent these days as well, since companies that outsource or lease software developers can draw on multiple skills and availabilities.

Who would win a ‘happy with my work even after 20 years’ contest if one were run globally among software engineers?

… We would place our bets on several baskets, yet for sure a significant part would go into the embedded software developer basket. While other professionals – say, business managers, artists, lawyers, doctors – might believe that the only thing software engineers need to know is how to code and how to stay up to date with the latest technological developments, there is a huge part of their work that is kept in the shadows – intentionally, dare we joke? 😊 So that the competition does not become too tight? 😊 – and that is a source of immense work satisfaction: the diversity of tasks throughout one’s career.

The keywords that best describe what an embedded software engineer does are plenty – all converging towards something that most employees of the world aim for. Novelty. Exercising a vocation. Creating. Connecting.

Our 20 years’ experience in the field has made us believe that embedded software engineering is a vocation. It is known for being a niche discipline within electronic engineering, more so than, say, desktop development. Yet it remains highly competitive at all levels. This, with a high degree of certainty, is what gives an embedded software developer a genuine passion and interest for technology and for troubleshooting technical problems.

Why do most talented IT engineers become embedded software developers?

IoT devices are now part of our everyday lives, and the general pace of technological change and innovation continues to gather unthought-of speed. There has never been a more exciting time to be part of the embedded software community and become a professional in the field.

The demand for advanced and intelligent technology is largely consumer-driven and, as such, companies adapt their products and create new ones continuously. The need for qualified, experienced embedded software developers becomes implicit… and strong.

So what does an embedded software developer do?

He or she is responsible for designing, developing, optimising and implementing the software that is programmed into devices built around a microprocessor. Embedded developers write code to solve problems and implement systems that make a physical hardware device work through software. From concept right through to delivery; from the briefing, writing, testing and fixing stages to final release: all of these fall on an embedded software developer’s ‘to do’ list.

To become an embedded software developer one needs a degree in computer engineering or a related field, as well as expertise in C/C++ programming, software configuration management (using tools such as Perforce, Git or SVN) and knowledge of specialised techniques for embedded programming.

Additionally, we at VON Consulting recommend for such positions that the future embedded software engineer on a client company’s team have a proven ability to read electronics schematics and troubleshoot problems and, ideally, experience with project management and the software development life cycle.

Certainly, the basics presented below are a requirement, and we list them with the accuracy an embedded developer would always fancy:

Similar to a business intelligence analyst or a DevOps engineer, an embedded software engineer should preferably feel comfortable collaborating with other teams and third parties – e.g. clients. Ideally, they participate in briefing meetings with the latter, after which they propose solutions and keep clients informed as the project progresses.

Individualistic embedded developers, working on specific topics, are also good assets to companies, yet the preference goes towards team players. Like almost anywhere else in our interconnected world.

… And it is, actually.

It is not us alone saying that, but the large number of young and brilliant engineers looking for a career that combines a passion for data with the ability to positively influence and support an organization.
Of the ‘young’ jobs that have opened up over the past decade for talented engineers, business intelligence analyst is one. A very trendy one.
What does a business intelligence analyst (BIA) do?

He or she analyzes complex sets of data within a company to determine recommendations for business growth and improvement. Knowing how to properly collect and interpret data can have a significant impact on the future success of a business.
The practitioner who finds this job suited to the talent and knowledge they have accumulated is generally an engineer by education and a businessperson by training and experience. Not only does such a person review data to produce finance and market intelligence reports, but they also detect patterns and trends in a given market that may influence a company’s operations.

While the business intelligence analyst position is just one of many roles related to BI and analytics in large organizations, the number of such positions, and their titles and responsibilities, varies based on the maturity of the company’s data management programs and, above all, on how essential BI is to the respective industry.
Some multinational companies acting in tech might have BI architects, BI developers, BI analysts and other internally-derived titles.

Generally, a BIA works between IT and business operations, and sometimes with the finance division as well. It goes without saying that a BIA works with a variety of people – both within the company and outside it – and with key stakeholders. Such an analyst permanently monitors the essential sources of information – strategic technology conferences and international events – to remain aware of business trends and the industry at large. A BIA professional needs socializing skills and good communication skills, and benefits from a large network that he or she can access and interact with.

When we recruit for BIA positions, we look for practitioners and consultants who are proficient in understanding data and in data modeling, profiling and validation, and who have gained significant expertise with data mining, query, analysis, visualization and reporting tools.

Familiarity with database management systems and data warehouse technologies is also required, as well as critical thinking and problem-solving abilities.
The beauty of such a position lies in the fact that the person becomes a key provider of strategic information that the entire business relies on. An engineer as well as a business professional; a statistician as well as an analyst.
To the ever-frequent question of whether a BIA needs to know how to code: our experience recruiting for such positions has shown us that familiarity with coding languages like Python, Java or R is often required.
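For a taste of what that looks like in practice, here is a minimal Python sketch of the kind of quick analysis a BI analyst might run – the monthly revenue figures are hypothetical, and only the standard library is used:

```python
from statistics import mean

# Hypothetical monthly revenue figures pulled from a finance report
months = [1, 2, 3, 4, 5, 6]
revenue = [120.0, 125.0, 131.0, 128.0, 140.0, 146.0]

# Fit a simple least-squares trend line; a positive slope suggests growth
mx, my = mean(months), mean(revenue)
slope = sum((x - mx) * (y - my) for x, y in zip(months, revenue)) / \
        sum((x - mx) ** 2 for x in months)

trend = "growing" if slope > 0 else "flat or declining"
print(f"Revenue looks {trend}, at roughly {slope:.1f} units per month")
```

In real engagements the same idea would be applied with visualization and reporting tools rather than a hand-rolled script, but the reasoning – turn raw figures into a recommendation – is the same.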

Imagine a team of the smartest people you could find – software engineers. They work on various project sprints – say, a new product development – and you are sure the results will be amazing, as you have selected the best ones there were.

Yet how do you make sure that operations run within the best time frame (e.g. could they work faster?…) and how do you integrate their work with the deployment team?
… here is where the magic brain and hands of a DevOps engineer come into the game. An interface between development and operations, as the name itself suggests: making sure that everything is geared towards releasing updates as efficiently as possible.
Basically, DevOps is the project manager’s, the facilitator’s or the event manager’s counterpart within the software division.
Ultimately, his or her work is about collaboration and removing barriers to it.

On the technical side and more concretely, DevOps engineers build, test and maintain the infrastructure and tools to allow for the speedy development and release of software.
In a nutshell, DevOps practices aim to simplify the development process of software.

When you invest in a strong DevOps engineer – or in DevOps teams, depending on the size of your organization and the scope of your project – the benefits become apparent quickly.

Even if an organization does not deploy new products frequently, a DevOps engineer is still needed to create and release regular updates to existing products much more quickly than under the traditional ‘waterfall’ development model.

How do you know that a DevOps engineer is doing his/her work perfectly? It is when you do not notice that anything has changed. In today’s fast-paced environment, this type of function (read: ‘development’) is quickly becoming a necessity rather than a luxury.

Should a DevOps engineer know how to code? Or rather, should he/she have good communication skills?

… well, a DevOps practitioner need not necessarily know how to code, and need not be an engineer in the traditional sense. Ideally, however, a DevOps engineer is an IT professional who works with software developers, system operators and admins, IT operations staff and others to oversee and/or facilitate code releases or deployments.

So he/she needs to understand the IT infrastructure, as they have to improve it (sometimes even design it), and they also have to do performance testing and benchmarking – that is, evaluate how well and how reliably systems run. These can be considered the day-to-day responsibilities of a DevOps practitioner. Engineer, that is.

What else does a DevOps engineer do? While optimizing release cycles, they also monitor and report, aiming to reduce the ‘time to detect’ (TTD) errors and the ‘time to minimize’ (TTM) them. Last but not least, they automate key processes and keep a sharp eye on security issues.
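As a rough illustration of what ‘time to detect’ means in practice, here is a small Python sketch – the incident records are entirely hypothetical – that computes the mean TTD from the kind of timestamp pairs a monitoring pipeline might produce:

```python
from datetime import datetime

def mean_ttd_minutes(records):
    """Average 'time to detect' in minutes over (occurred, detected) pairs."""
    deltas = [(detected - occurred).total_seconds() / 60
              for occurred, detected in records]
    return sum(deltas) / len(deltas)

# Hypothetical incidents: when an error occurred vs. when monitoring caught it
incidents = [
    (datetime(2021, 7, 1, 10, 0), datetime(2021, 7, 1, 10, 12)),
    (datetime(2021, 7, 2, 14, 30), datetime(2021, 7, 2, 14, 34)),
    (datetime(2021, 7, 3, 9, 15), datetime(2021, 7, 3, 9, 23)),
]

print(f"Mean TTD: {mean_ttd_minutes(incidents):.1f} minutes")  # → Mean TTD: 8.0 minutes
```

Tracking a metric like this over time is what lets a DevOps engineer show that monitoring improvements are actually paying off.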

… kind of cool, right?

Think further when selecting your DevOps engineer: he/she will be running meetings, setting the schedule for releases and leading the review process, as well as getting hands-on with automation, complex software tools and infrastructure design. All these tasks indicate that one should look for an impeccable organizer with strong communication and interpersonal skills. They should be approachable and empathetic. Sometimes, these traits might weigh more than their technical skills.

So, find your ideal DevOps engineer and keep him/her close to your company. They are rare and they are precious, especially if they have some 12–15 years’ experience in the field and are uber-disciplined and charming.

As we said, they are worth their weight in gold.

… one last point: they should understand what an ‘agile’ business means these days.

Codex – a new artificial intelligence technology that can generate programs in 12 different coding languages – can be seen as a technology that will, someday soon, replace humans. But according to veteran programmers who tested it, that will not happen.

What is Codex?

Codex is an AI system that can translate natural language to programming code. It is developed by OpenAI, one of the world’s most ambitious research labs.

About four years ago, researchers at labs like OpenAI started designing neural networks that analyzed enormous amounts of prose; by pinpointing patterns in all that text, the networks learned to predict the next word in a sequence. The researchers also observed that the systems they built could even write their own computer programs – short and simple ones in the beginning – learning to do so from the untold number of programs posted to the internet.

Then OpenAI went a step further, training a new system — Codex — on an enormous array of both prose and code. The result? A system that understands both prose and code — to a point.

Is Codex a threat to programmers?

Professional programmers, such as Tom Smith or Ania Kubow, tested the technology, searching for the answer to this very question. Their findings? After several weeks working with this new technology, Smith believes it poses no threat to professional coders. In fact, like many other experts, he sees it as a tool that will end up boosting human productivity. It may even help a whole new generation of people learn the art of computers, by showing them how to write simple pieces of code, almost like a personal tutor.

“This is a tool that can make a coder’s life a lot easier,” Smith said.

Codex can generate programs in 12 computer languages and even translate between them. But it often makes mistakes, and though its skills are impressive, it cannot reason like a human. It can recognize or mimic what it has seen in the past, but it is not nimble enough to think on its own. So it looks like Codex extends what a machine can do, but it is another indication that the technology works best with humans at the controls.

“AI is not playing out like anyone expected,” said Greg Brockman, chief technology officer of OpenAI. “It felt like it was going to do this job and that job, and everyone was trying to figure out which one would go first. Instead, it is replacing no jobs. But it is taking away the drudge work from all of them at once.”

For more details on the topic: https://medium.com/the-new-york-times/ai-can-now-write-its-own-computer-code-thats-good-news-for-humans-661fe86b85af

The semiconductor crisis – it is no secret that the world is dealing with an IT crisis.

Semiconductors, also known as microchips, are essential components in every electronics product, whether it is a simple remote control unit or a supercomputer. With the start of the COVID-19 pandemic in early 2020, the supply chains for consumer electronics came under enormous pressure: people around the world had to find new ways to work and play.

The car industry was forced to shut down factories during lockdowns, which led to cancelled chip orders. What carmakers did not take into account, when forecasting lower demand for the rest of the year due to the pandemic, was the faster-than-anticipated bounce-back. This sent semiconductor supply chains into a downward spiral, creating a shortfall in the tiny electronics across a wide swathe of industries.

While car companies like General Motors, Ford Motor and Volkswagen were forced to temporarily shut down production lines, thus cancelling chip orders, chip foundries like Taiwan Semiconductor Manufacturing Corp (TSMC) reassigned their production capacity for the remainder of 2020 to companies making smartphones, laptops and gaming devices, which were experiencing a surge in demand during the lockdowns. But when car sales increased in the third quarter, chip factories could not meet the high demand and could not respond fast enough.

What was the consequence?

Other industries, especially IT and telecom, experienced a spike in sales due to the pandemic’s “stay at home” effect but faced the same challenge: they found themselves unable to secure adequate supplies to meet the increased demand.

Apple reported, for instance, that the shortage in semiconductors would incur a cost of US$3 billion to US$4 billion in its financial third quarter to June, with the biggest impact felt on Mac and iPad products. Midea Group, the world’s largest maker of white goods like refrigerators, washing machines and air conditioners, said the prices of chips used for home appliances are set to increase as the global shortfall persists.

Xiaomi Corp recently increased the prices on some of its TV models, citing higher prices in key components, while Samsung Electronics and Sony have also raised prices on a range of products.

The United States and the semiconductor crisis

Semiconductors act as the brains that power our technological devices. These chips, now smaller than a stamp and thinner than a piece of hair, have revolutionized the modern world. Innovation in the field has led to smarter, faster, and smaller technology (think pacemakers, smartphones, solar energy, self-driving cars, laptops, airplanes, just about everything you use). They’re also the second largest export in the U.S. and are responsible for 2 million American jobs.

The recent shortage of semiconductors sent American companies – as well as companies around the world – that usually rely on a lean inventory of chips into crisis.

US companies like GM and Ford have announced that they’re temporarily shutting down plants because of a semiconductor shortage.

This led authorities to assess semiconductor manufacturing capacities stateside, especially as the United States relies on semiconductors as the building blocks of its digital economy. And right now, it does not seem to be producing enough.

In the US, in February 2021, President Joe Biden issued an executive order to review America’s industrial supply chain, partially to assess why there was a shortage of production in the United States (microchips included). Federal authorities are considering more R&D measures to bring the whole supply chain home.

This is because U.S. semiconductor companies accounted for about half of all global sales, or about $193 billion, in 2020. But only 12% of those chips are actually manufactured in the U.S. So while the U.S. still leads in design, supply-chain issues have become a problem and give China, for example, a lot of leverage over the U.S.

How long will this crisis last?

As Taiwanese semiconductor companies have boosted production in China, it seems the semiconductor shortage that has gripped the world could last well into 2022. Intel, the semiconductor giant, on the other hand, warned on July 23 that the shortages could extend into 2023.

 

You can read more about the topic here:

https://www.scmp.com/tech/tech-war/article/3133061/why-there-global-semiconductor-shortage-how-it-started-who-it-hurting

https://www.trtworld.com/business/global-chip-shortage-to-hit-smartphone-market-next-48649

https://fortune.com/2021/07/16/biden-administration-sounds-the-alarm-on-the-semiconductor-crisis/

When a technology has reached a certain level of trust – it is well understood and easily managed – you can say that it is “boring”, thus paying it the best compliment there is. Kubernetes has become just that: standard cloud-enabling plumbing that “works”.

As Jonas Bonér, CTO and co-founder at Lightbend, said, there is a huge gap between the infrastructure and building a full application. This means that, in the near future, developers will need more tools in the toolbox, extending the infrastructure model of isolation into the app itself and creating a powerful, yet simple, programming model.

Tesla, for example, relies on “digital twin” capabilities that power its electric grid, capabilities made possible by the combination of Akka and Kubernetes. Colin Breck, a Tesla engineer, says: “The majority of our microservices run in Kubernetes, and the pairing of Akka and Kubernetes is really fantastic.”

What are the unsolved areas on the cloud-native stack that are evolving above Kubernetes? According to Bonér, there are three: application layer composition, stateful use cases, and data-in-motion use cases.

Application layer composition

“People too often use old tools, habits, and patterns, often originating from the traditional (monolithic three-tier) designs that inhibit and constrain the cloud model delivered by Kubernetes,” Bonér says. So what needs to be done is to extend the model of containers, service meshes, and orchestration all the way up to the application/business logic. This way, we will leave the developer with the essence: the business logic and its workflow.

Stateful use cases

Most of the cloud ecosystem mainly tackles so-called 12-factor-style applications. In the cloud, unless you have a good model and the tools supporting it, you are forced back into the three-layer architecture of pushing everything down into the database every time.

“The value is nowadays often in the data, and it’s often in the stateful use cases that most of the business value lies — making sure you can access that data fast, while ensuring correctness, consistency, and availability,” Bonér says.

Data-in-motion use cases

The Kubernetes technology doesn’t yet offer great support for streaming and event-based use-cases.

“Serverless gets us closer to addressing the problem of extending the model of Kubernetes into the application itself. That’s what it’s all about. Abstracting away as much as possible, and moving to a declarative model of configuration rather than coding, where you define what you should do and not how you do it,” said Bonér.

All in all, as the cloud-native stack continues to evolve above the Kubernetes infrastructure level, it will be interesting to see how these concepts play out to serve specific language developers.

Read more on the topic here: https://www.infoworld.com/article/3567648/what-comes-after-kubernetes.html

Last year, global tech acquisition deals totaled $634 billion, a 91.8% year-over-year increase, according to GlobalData. This year, the mergers & acquisitions market came into full bloom from the very early days of 2021.

Here are some of the transactions which will most likely reshape the future.

IBM bought Turbonomic to focus on observability for customers

IBM announced the acquisition of Turbonomic at the end of April.

Turbonomic specializes in Application Resource Management (ARM) and Network Performance Management (NPM) software.

This applies to containers, VMs, servers, storage, networks, and databases.

This acquisition will help IBM offer a greater range of AIOps and observability options for customers.

Microsoft acquired Kinvolk for its managed Azure Kubernetes Service

Microsoft made a move to boost its capabilities in the Kubernetes space with the acquisition of German firm Kinvolk. This also took place in late April.

Founded in 2015, Kinvolk has been building enterprise-grade tools to help developers adopt cloud-native technologies.

Microsoft expects to integrate the Kinvolk team and technology into the team responsible for its managed Azure Kubernetes Service (AKS).

UiPath purchased Cloud Elements to build more effective automations

On March 23rd, RPA vendor UiPath made an addition of its own, picking up the Denver, CO-based firm Cloud Elements.

Cloud Elements specializes in API integration, similar to Mulesoft and Apigee, which are now part of Salesforce and Google, respectively.

For UiPath, a Romanian-born start-up, this capability could allow customers to better link processes that span various enterprise systems.

SAP acquired Signavio for cloud-native solutions

German software firm SAP announced it’s acquiring fellow German firm Signavio, which specializes in cloud-native enterprise business intelligence for processes and management, in late January.

This acquisition adds Signavio-designed solutions to the bundle of existing SAP software and services aimed at offering customers “business transformation-as-a-service”.

SAP will aim to use Signavio’s expertise around business process intelligence to help more customers optimize these processes as they become more digital.

Qualcomm integrated Nuvia for the 5G era

Qualcomm announced it was acquiring Nuvia in early January. This 2021 tech acquisition led the way to the upcoming M&A operations of the year.

Nuvia was founded by a team of Apple engineers and makes high-performance CPU chips.

Together, the two companies will be positioned to deliver a new class of products and experiences for the 5G era.

You might also find interesting: SMEs rely on RPA for business efficiency

The hybrid work system (WFH & WFO) highlighted the importance of intelligent software robots developed based on Robotic Process Automation (RPA) technology.

RPA served the growing need for creative solutions and activities during the Covid-19 pandemic. RPA is also crucial for a successful remote workflow.

Currently, 23.7% of small and medium-sized companies in Eastern Europe confirm that they need software robots. In these companies, RPA is used for the digitization of the business and for operational efficiency, according to a survey published on a business portal.

The advantages of working with robots – and the downside

Almost half (47.4%) of the companies believe that intelligent software robots are useful for eliminating repetitive actions.

At the same time, almost 30% of companies believe that RPA can lead to an increase in the efficiency of remote work.

RPA has positive effects on the evolution of the business. It also accelerates the development of the company and new digitization policies.

Just as important, companies believe that by integrating RPA technologies they will get lower costs (31.6%) and increased sales (18.4%).

In addition, 39% believe that, with the support of this technology, boring and repetitive tasks will be left to intelligent software robots. Equally, almost 24% say that employees will be able to engage in more creative activities.

Only 8% believe that the team may be reluctant to integrate this technology into the company’s operational processes, while 29% believe that software robots could replace certain positions in the company.

You might also find interesting: Brain activity, different when coding from maths

RPA technologies – the reconfigured future  

RPA is an industry that has accelerated strongly in recent years. In Eastern European countries such as Romania, we’ve recently witnessed a rise in global providers of such solutions.

24% of small and medium-sized companies participating in the above-mentioned survey already use automation technologies. Half of them say they are considering integrating this technology into their business this year or in the future.

Nonetheless, 50% of respondents to the study believe that the main barrier to implementing intelligent software robots is the high cost.

Strong suits in the long run? 47% of companies say they consider RPA a viable and cost-effective investment.

Biggest drawback? 36% of companies believe they will face a lack of training of employees on the adoption and use of RPA technology.

Neuroscientists from MIT have discovered that brain activity while coding differs from processing language or doing mathematics.

Many people equate coding with learning a new foreign language. And, granted, there are certainly many similarities. To the brain itself, however, the two seem to be quite different.

Researchers took fMRI brain scans of young adults during a small coding challenge, using both Python and the visual programming language ScratchJr. The purpose was to see which parts of their noggins lit up.

Almost no response was seen in the language processing parts of the brain.

Instead, it appears that coding activates the ‘multiple demand network’ of our brains. This area “is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.”

Yet when solving maths problems directly, slightly different brain activity patterns emerge.

The multiple demand network is spread throughout the frontal and parietal lobes of the brain. Previous studies have found that math and logic problems dominate the multiple demand regions in the left hemisphere. Tasks involving spatial navigation lean on the right hemisphere more than the left.

Coding activates both the left and right sides of the multiple demand network. This counters the belief that coding causes the same brain activity as maths. One interesting fact: ScratchJr activated the right side slightly more than the left.

You can find a full copy of the study here: https://www.biorxiv.org/content/10.1101/2020.04.16.045732v2.full.pdf

You might also find interesting: https://www.vonconsulting.ro/misim-the-ai-automated-coding/

Guido van Rossum launched Python on February 20th, 1991. Python is known as an incredibly versatile language. It is used in developing some of the most popular web applications, from Instagram to Dropbox.

At the same time, it is a gateway language for many in the world of software development.

Moreover, it is frequently taught to schoolchildren and people worldwide who lack any prior programming experience.

Read more details here: https://www.vonconsulting.ro/study-python-is-the-top-programming-language-of-2020/

One reason for the popularity of this programming language lies in its simplicity. Its users do not need to understand compilers or assemblers, nor the other tiny details many programming languages require.

Feedback is instant, and Python is improving all the time. In addition to its popularity among entry-level users, Python is rapidly becoming a priority within the business environment. It has also found favor for serving as the ‘gluing language’.

Large development projects always involve a trade-off between scale and speed. The typical software stack that a large organization uses every day may include code written in several different languages. Moreover, the underlying data may be stored in numerous formats, languages, and locations.

In such environments, Python has taken root as a subtle, but powerful way to bridge between different applications and code libraries.

When Python is used as glue code between compiled languages, development cycles are shortened. Results are more interactive and quicker to observe. At the same time, delays caused by things such as long compile times are eliminated.
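The ‘gluing’ role can be illustrated with a tiny standard-library sketch: Python launches an external program (here the Python interpreter itself stands in for any compiled tool in the stack) and post-processes its text output into a usable value:

```python
import subprocess
import sys

# Step 1: run an external program, as if calling a compiled tool in the stack
result = subprocess.run(
    [sys.executable, "-c", "print(6 * 7)"],
    capture_output=True, text=True, check=True,
)

# Step 2: the 'glue' - parse the tool's text output into a Python value
answer = int(result.stdout.strip())
print(answer)  # → 42
```

The same pattern scales up: swap in a C++ binary, a Fortran solver or a database CLI, and Python remains the thin layer that connects their inputs and outputs.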

Researchers from MIT and Intel have created MISIM, an algorithm that can create algorithms. What does that mean for software developers?

For most of us, writing code is like learning a foreign language. But no more! A team of researchers from MIT and Intel is looking to change all that by building code that will write code.

The new technology is named MISIM (Machine Inferred code Similarity). MISIM studies snippets of code to understand what a piece of software intends to do. Using a pre-existing catalogue of code, it can understand the intent behind new code.

Will this actually help software developers? The Intel-MIT team says yes. MISIM will help developers working on software by suggesting other ways to “attack” a program. MISIM will also aid them in offering corrections and options that will make the code more efficient.

The principle behind MISIM is not new. Technologies that try to determine whether a piece of code is similar to another already exist. Developers use them, but they focus on how code is written rather than on what it intends to do. MISIM can act like a recommendation system, suggesting different ways to perform the same computation – ways that are faster and more efficient.
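MISIM itself is far more sophisticated, but the general idea of scoring code similarity can be sketched crudely in Python. The toy function below is our own naive stand-in, not the MISIM algorithm: it compares the sets of syntax-tree node types two snippets use.

```python
import ast

def node_types(source: str) -> set:
    """The set of AST node type names appearing in a snippet."""
    return {type(node).__name__ for node in ast.walk(ast.parse(source))}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity (0..1) of the two snippets' node-type sets."""
    sa, sb = node_types(a), node_types(b)
    return len(sa & sb) / len(sa | sb)

# Two ways of summing a list - same intent, different style - plus an outsider
loop_sum = "total = 0\nfor x in items:\n    total += x"
builtin_sum = "total = sum(items)"
unrelated = "print('hello world')"

# The same-purpose snippets score higher than the unrelated one
print(similarity(loop_sum, builtin_sum) > similarity(loop_sum, unrelated))  # → True
```

A structural metric like this still leans on how the code is written; MISIM’s contribution is learning a representation that captures what the code is trying to do.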

Software development is becoming more and more complex, and technologies such as MISIM could have a significant impact on productivity. That was the opinion of Justin Gottschlich, the lead of Intel’s machine programming research team.

More details about the MISIM algorithm here: https://www.zdnet.com/article/software-developers-how-plans-to-automate-coding-could-mean-big-changes-ahead/

You might also find interesting: https://www.vonconsulting.ro/study-python-is-the-top-programming-language-of-2020/