DEV Community

CrackJS

Posted on • Originally published at javascript.plainenglish.io

Will engineers get replaced? Here's your answer.

First off, AI won't replace developers. It will act as a tool for clever, self-aware developers, but it will displace the entry-level ones who aren't cautious enough to change their behaviour and strategies.

Additionally, take AI and software engineering career advice with a grain of salt, including mine. It is a hot topic right now, and many influencers play on reader sentiment to get attention while business runs as usual. It also depends on whom you ask.

We all know AI has started playing a significant role in programming and affects thousands of programming jobs worldwide, but how do you know whether you will be affected by it? Well, I'll help you decide that and make strategic plays to become a programmer no one can replace, only learn from.

PS: This could be one of the longest pieces you've ever read, and its ROI will match the time you invest in reading it. If you want, you can divide it into multiple reading sessions. However, I suggest reading it till the end if you wish to take advantage of AI, understand the current job market, understand the history of such advancements, and desire to become one of the few developers remaining in the top tier.

The people who end up laid off or fired will be the ones who avoid reading the entire piece, and I don't want you to be one of them. If you are a premium member, listen to it as a podcast. This is your all-in-one answer sheet for all your concerns.


If you don't want to know what AI can do or the types of it, you can skip this section. However, I recommend reading it to get the hang of the current situation to advance your career. After all, the entire point is to understand the capabilities of this marvellous innovation.

Introduction

You might have heard, "AI is here to stay," which is true. Truthfully, AI should stay to improve our lives and make them easier, but in a beneficial manner. However, we must understand what AI is and how it works, including its subtypes. How can you defend yourself or remain prepared if you don't know what you're up against? AI is a friend, not an enemy, so don't treat it like one.

What is Artificial Intelligence?

In its simplest form, Artificial Intelligence is the practice of making machines and computers behave intelligently by training them on data produced by human intelligence, using machine learning algorithms, deep learning, and more. We also use AI to model human intelligence and tasks such as interpreting human speech, identifying patterns, etc.

It is the study of making computers and machines emulate human intelligence to understand our thoughts, make things easier for us, and recognize the objects around us. AI deals with making computers capable of performing intelligent operations at speed.

It takes a set of inputs and produces an output based on the situation, the resources at hand, the human-generated data it was trained on, and more. We are trying to make machines intelligent enough to make decisions for us or assist us based on our data. In theory, AI could also model and improve itself beyond human capability after understanding us, but we don't have practical proof of that yet.

According to Alan Turing, an intelligent machine would be one whose behaviour is indistinguishable from a human's. So far, we've only witnessed applications of Weak AI, such as Siri, Alexa, ChatGPT, and others.

Weak AI & Strong AI

ChatGPT cannot learn on its own; it depends on the data used to train it. A model that cannot observe and learn beyond its training data is called Weak AI, also known as Narrow AI. These models perform specific tasks. Weak AI is the practical form of a model that can speak to humans, respond with something resembling human intelligence, and make decisions the way a human would, but with limited knowledge.

However, Weak AI models aren't versatile. For instance, if I ask a chess engine to play online poker, it will fail because it can only perform the specific task it was trained for. Take AlphaGo, which is extremely good at playing the board game Go. If I make it play chess, it won't come remotely close to performing that task. Based on this, I assume you can predict what a Strong AI model could do. If not, let me explain.

Strong AI is a generalist that could perform any task, whether text or image generation, based on input from the user. For your reference, Strong AI, also known as AGI or General AI, is still theoretical and does not exist to this date. Such systems would be self-aware, learn from us, develop emotions, hold themselves accountable, etc. Take the robot from the Terminator series or the HAL 9000 computer from 2001: A Space Odyssey as examples.

It could plan for the future, solve problems (not with guns, hopefully), and learn as it continues to exist in the real world. The intelligence of AGI models would equal human intelligence, and ASI (Artificial Super Intelligence) could surpass the intelligence and ability of the human brain. However, this is all theoretical for now. Since we can only witness Weak AI, how does it work in the first place?

How does a Weak AI work, such as ChatGPT?

A model like ChatGPT works within the constraints of its training data. It predicts the next word, making a statistical guess about what fits the context based on patterns it learned from data across the Internet. It doesn't know whether an answer is right or wrong, but it produces one anyway because that is what it was trained to do. It has no strong reasoning behind its answers; it emits a sequence of words based on patterns. For code, this resembles imitation learning: the model takes existing code, analyzes it, and extends it based on the patterns it found. Weak AI acts as a proof of concept for the potential of AI in the future.

An AI model, especially a weak one, gets trained on tons of information from the Internet, though the exact data depends on the type of model. For instance, a document-analysis model gets trained on PDFs, docx files, etc. However, models like ChatGPT work as general text-generation bots and require data from across the whole Internet.

A corporation like OpenAI, for a model like ChatGPT, makes partnership deals with data-rich firms that hold tons of user data, like Google, or code, like Stack Overflow. Otherwise, they scrape or crawl the Internet to get more data. They feed this data to the model using ML techniques, and the model learns from it to form patterns.

After it forms common patterns, it uses them to respond like a human. For example, take the way humans greet each other. The model detects that conversations begin with a kind greeting like "Hey," and the response usually comes up as "Hello! How are you doing?"

When the model enters real-world use or the testing stages, it uses those common patterns to respond to users, and those interactions are collected to enhance future versions. This process involves reinforcement learning from human feedback, and we'll talk about it soon.

The words and sentences it throws at us generally come from those patterns, and the model effectively gambles that the response is correct. If you've noticed, each ChatGPT response has two feedback buttons to rate the reply. If you give a thumbs down, that feedback is recorded and, aggregated across many users, used to fine-tune future versions so this sort of response gets avoided for such questions, and vice versa.
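To make the "patterns" idea concrete, here is a deliberately tiny sketch of my own (an illustration of frequency-based next-word prediction, not how ChatGPT is actually implemented; real models use neural networks with billions of parameters):

```javascript
// Toy next-word predictor: count which word most often follows
// each word in the training sentences, then predict by frequency.
function trainBigrams(corpus) {
  const counts = {};
  for (const sentence of corpus) {
    const words = sentence.toLowerCase().split(/\s+/);
    for (let i = 0; i < words.length - 1; i++) {
      const cur = words[i];
      const next = words[i + 1];
      counts[cur] = counts[cur] || {};
      counts[cur][next] = (counts[cur][next] || 0) + 1;
    }
  }
  return counts;
}

function predictNext(counts, word) {
  const followers = counts[word.toLowerCase()];
  if (!followers) return null; // outside its training data, it has nothing
  // Pick the most frequent follower -- no reasoning, just frequency.
  return Object.keys(followers).reduce((a, b) =>
    followers[a] >= followers[b] ? a : b
  );
}

const corpus = [
  "hey how are you",
  "hey how is it going",
  "hello how are you doing",
];
const model = trainBigrams(corpus);
console.log(predictNext(model, "hey")); // "how" -- the most common follower
```

Notice the model has no idea what a greeting means: it only knows that, in its data, "how" usually follows "hey". Ask it about a word it never saw and it has nothing to offer, which is the Weak AI limitation in miniature.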

From the ChatGPT 4.0 version.

Ultimately, that is how an AI model works. You give it data, the model learns or memorizes it, and it responds to you based on the observations it made during the training phase.

The model does not know whether its actions are ethical or correct. It only knows that these actions produce the outcomes a user requests based on its patterns, and it commits to those actions without understanding the consequences, which becomes a security concern.

However, you might say you've read that AI harms people and ultimately displaces them. So, how does it even help us?

How can AI help us?

AI will help us automate mundane and repetitive tasks: writing boilerplate code, detecting errors, code review, documentation, code templates, software testing with specs, basic CRUD operations, creating (average) landing pages, and more. It will increase our productivity over the years and allow us to focus on creative, original, and complex solutions. It can provide suggestions and code samples and diagnose straightforward issues, improving efficiency in distinct ways.
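To show what I mean by boilerplate, here is a minimal in-memory CRUD sketch of my own (generic, not tied to any framework): thousands of near-identical versions of this exist online, which is exactly why assistants generate it reliably.

```javascript
// Minimal in-memory CRUD store -- the kind of repetitive code
// worth delegating to an assistant so you can focus on the
// creative parts of the application.
function createStore() {
  const items = new Map();
  let nextId = 1;
  return {
    create(data) {
      const id = nextId++;
      items.set(id, { id, ...data });
      return items.get(id);
    },
    read(id) {
      return items.get(id) || null;
    },
    update(id, data) {
      if (!items.has(id)) return null;
      const updated = { ...items.get(id), ...data, id };
      items.set(id, updated);
      return updated;
    },
    remove(id) {
      return items.delete(id);
    },
  };
}

const users = createStore();
const alice = users.create({ name: "Alice" });
users.update(alice.id, { name: "Alice B." });
console.log(users.read(alice.id).name); // "Alice B."
```

Nothing here requires creativity, which is the point: reviewing ten lines like this takes seconds, while the architecture around it still needs a human.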

It will create new job opportunities for people who specialize in prompting these AI models to help engineers. We will need folks who train, improve, and maintain these models and ensure they remain ethical. At this point, we're overestimating its intelligence beyond its designed functionality.

AI will also help us write code by predicting the upcoming keywords in real time. Take GitHub Copilot, powered by OpenAI's Codex technology, as an example. It can suggest JavaScript, Python, or any other language's code to increase productivity, but with the accompanying risks of leaking information or getting yourself into legal trouble.

We focus on solving impactful problems and allow these tools to write code for little issues or features. Developers use AI to quickly check for bugs, errors, and defects before shipping the product to the customers. As companies often rush to release their software to beat the market competition, various bugs get neglected during the entire process. We can use AI to solve those bugs before the users experience them.

AI will unquestionably assist us in code optimization and security, suggesting more efficient approaches based on patterns detected in its training data. Additionally, it eases the burden on project managers and finance teams, whose budgets often overrun. AI can help us construct more accurate budgets, deadlines, plans, documents, and estimates based on past projects.

Since the compiler is picky about semicolons, statements, and keywords in particular positions, tools like Copilot help reduce compile-time errors: as the AI goes through your code, it improves it, makes changes, adds missing statements, and so on.


I divided the writing into two sections - the problem and the solution. Let's begin by understanding the matter at hand to get the answers to all unanswered questions.

Who will get affected, how, and why?

Let us look at the people who will be affected by these new advancements, including the impact on the salary or income of software engineering folks. Additionally, I will cover some disadvantages of AI and where it falls short.

The media describes AI in many ways: as harmful, as the next tremendous advancement, or as something that can replace engineers. Let us examine those claims against historical precedent, legal issues, etc.

Who will AI replace?

The ones who merely performed mundane and repetitive tasks, such as software testing, CRUD operations, documentation, etc., will get replaced first, along with those who coded unchallenging solutions without complex creative thinking and those who built static applications with HTML, CSS, JS, PHP, and other approachable stacks, including myself. I only survive these storms because I work across multiple industries and combine them, so these advancements don't affect me, but that topic is for another day.

Even devs who created chatbots, such as Discord bots in Python or JS, can get replaced if they are not building something that solves complex problems. Ultimately, the repeated code solutions across the Internet's open-source projects will act as building blocks that AI models can assemble quickly into intricate solutions. If you are a developer who decides to create a bot like Midjourney, you will survive any advancement as long as you keep upskilling yourself, which I hope makes my point clear.

Existing AI models can create anything already available on the Internet or Stack Overflow, with evident constraints, like errors, which you can fix if you know the fundamentals of your language of choice.

We can omit JavaScript and Python from the repetitive list because we can use them to build every possible application, including AI models. In all of this, even soft skills matter alongside programming experience. Entry-level developers whose tasks I listed in the previous section, "How can AI help us?", will also get displaced.

AI has shown its potential by creating basic static websites (not with good UI/UX yet) and handling tasks like program documentation or small features, work that junior (L3) developers would usually handle. For example, take writing the encryption logic for an authentication server, a small chunk of an enormous project.

These small chunks of enormous projects will get delegated to AI models. Additionally, if you are in a situation wherein senior devs create the design (architecture) or the flow of an application, which you then follow to build a feature with the resources they hand you, and you represent a small part of a large company, you must begin to upskill yourself straight away.

In this clutter, AI only favours senior or mid-level developers with 5+ years of experience creatively building software, understanding user requirements, and designing architecture. Only developers doing Feature Engineering will get an edge over low or entry-level developers.

However, this doesn't mean if you're a CS student or entry-level engineer, that you must quit engineering. No. Instead, you must use AI to your advantage to speed up the learning process, lower the learning curve, and quickly gain the same experience as a high-level engineer, and we'll talk about this in a bit.

For the uninitiated, Feature Engineering means implementing complex features with raw data, like combining Google Bard with Google Search, which involves re-designing the existing architecture of Google Search, refactoring the code, collaborating with other developers, creatively figuring out an efficient solution, taking feedback from the beta testers, and implementing it using the right skills to avoid bugs, even if AI can help us eliminate those bugs eventually.

AI keeps improving, and developers must track these improvements to stay ahead instead of getting crushed. In terms of JavaScript, AI cannot replace developers with a strong foundation, because as mundane developers get replaced, the demand for talented developers will increase. There's a common trait among seasoned senior programmers and mathematicians: they have their basics sorted.

The priority will shift from hammering nails into a house to building the house with robots hammering the nails. By the way, the low-level engineers are hammering the nails for now.

AI will allow Programmers to focus on high-level tasks, and only skilled developers can perform high-level tasks, such as implementing complex features. The rest of the work, like documentation or testing, will get handled by the AI, and it will only create opportunities for the people willing to work with AI and integrate it into their workflow.

Even though AI will not replace all Programmers, it will eliminate most mediocre developers with 1.5 to 4 years of experience. It will impact the job market, and we'll discuss job opportunities soon. To understand this better, take the example of Canva.

Canva for Designers

When Canva was initially released, people assumed even designers would get displaced; their jobs were in danger. However, Canva helped designers get more clients and increased the value of high-level designers, those who focused on aspects beyond drawing, such as conveying messages to viewers, showing emotion through designs, inviting users to engage with them, etc.

Due to the release of Canva, people began to appreciate the importance of design, and the ones who couldn't design were now able to get their foot in the door. Now, apply the same ideology to software engineering with the data provided in the previous sections, and you'll understand how everything works when a few aspects of each field get automated.

The low-level designers who made basic logos were pushed out when Canva showed up at the party. In the same way, mediocre developers building static sites, no-code tool makers, low-level back-end devs, and others will get replaced.

Nowadays, no-code tools have AI models built in, so it becomes difficult for developers working with no-code tools to sustain themselves, as AI can do their job. These developers switched from coding to no-code and are now being replaced by yet another layer of technology above them. Advancements happen, and parts of a field get automated. It is all part of innovation.

By the way, mediocre developers cannot perform Feature Engineering, and neither can ChatGPT, including GPT-4.

When mediocre developers get replaced, high-level ones will get their work done more easily without intermediaries, increasing their productivity. An AI model doesn't understand the brush strokes of an oil painting; it merely learns that images tagged as oil paintings follow particular patterns, and it generates results from those patterns.

Even Shazam uses AI technology to detect music. Instead of listening to an entire track, Shazam catches a set of highs and lows of a song to reach a conclusion. Shazam first analyzes millions of songs, stores their fingerprints in a database, and then matches them against the input from the user. It only understands a set of patterns.
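Here is a toy sketch of that matching step, in my own simplified form (Shazam's real system hashes time-frequency peak pairs from a spectrogram; here each "song" is just a set of precomputed hash strings, and the match is whichever song shares the most hashes with the snippet):

```javascript
// Toy fingerprint matching: score each song in the database by
// how many of its hashes also appear in the recorded snippet,
// and return the best-scoring song (or null if nothing matches).
function bestMatch(database, snippetHashes) {
  let best = null;
  let bestScore = 0;
  for (const [song, hashes] of Object.entries(database)) {
    const set = new Set(hashes);
    const score = snippetHashes.filter((h) => set.has(h)).length;
    if (score > bestScore) {
      bestScore = score;
      best = song;
    }
  }
  return best;
}

const db = {
  "Song A": ["p1", "p2", "p3", "p4"],
  "Song B": ["q1", "q2", "p3", "q4"],
};
console.log(bestMatch(db, ["p2", "p3", "p4"])); // "Song A"
```

Again, no understanding of music is involved: it is pure pattern overlap, which is why a few seconds of audio, reduced to the right fingerprints, is enough.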

Internet & Open-Source Limitations

For now, AI depends on the Internet, and the codebase of applications like Uber is not open source, so the solutions of Uber's developers remain hidden from ChatGPT. It can only access code samples from GitHub that are publicly available, unless Microsoft, which owns GitHub and holds a stake in OpenAI, acts as an intermediary, which would create a huge privacy concern and drive an exodus from GitHub.

If you ask it to build something like Uber, it won't be able to make it in the first place, but it will try to create something from the resources at hand on the Internet, which requires your intuition and knowledge to fine-tune it, or even develop it in the first place. If you expect to build large applications and algorithms with GPT, your head might be in the sand, but I'm open to opinions.

Since I discussed design, let me discuss the Dev Mode feature in Figma, which can generate code based on UI/UX designs. People doubted developers when this feature came out too, and it is important to point that out.

Figma introduces Dev Mode

Figma's Dev Mode recently launched, and many folks questioned the need for developers yet again. Figma is trying to make development easier for engineers by bridging UI/UX design and front-end development work. You would still need a developer to take the code Figma suggests and make the application operate appropriately.

Not only that, but if you're making something like Uber, or any application that requires back-end work, Figma falls short. As a front-end developer, I find the new feature offers only slight help. Most of the time, the code snippets generated for these elements are generic and not optimized for efficient implementation.

Thus, it requires me, as a developer, to fix the suggested code and implement it myself. I tend to avoid using that feature for the same reason.

Additionally, it doesn't generate code for an entire application or website. Instead, it provides code snippets for individual elements within the design, with generic functions, properties, and structure. You still need someone to put everything together and run the show. But as I said earlier, these features exist to help developers. We're beyond only writing code 24/7.

Figma made a bet, taking advantage of the current publicity around code generators to bring designers and developers together. Even though they had wanted to bring developers closer for a long time, they released the feature at the best possible moment for greater exposure. I appreciate the clever move on their end.

The real question remains, "Would corporations embrace AI?"

Will corporations embrace AI?

While engineers work in large corporations, most of the code remains hidden from the general public. As these companies embrace AI, they may expose code to competitors who have invested in these AI models. Beyond that, corporations may face heavy consequences for automating with AI models: a model could effectively open-source the code, leak credentials, or harm users, because it has no sense of accountability. There is a difference between fiction and practicality.

Let's look at a few factors that could affect corporate decisions to implement AI into their systems, and whether they will bear the costs and consequences of replacing some engineers for the price. Sometimes, what seems rational never becomes practical.

Practicality > Rationality

Speaking practically, software companies won't risk shifting their core functionality to AI technologies while firing software engineers. AI lacks in a lot of areas, including liability. AI doesn't yet care about the consequences of its actions, and companies won't risk their software development codebase on an intuition-driven technology.

AI is also very expensive and time-consuming to implement. Considering corporations follow the "use it until it breaks" rule, this could be a difficult solution to adopt. Companies will reduce headcount by eliminating the people who work on ordinary systems, since AI could do their job, though at an uncertain cost. Tech companies are known for cutting costs with or without AI, but AI gives them another reason to remove people doing repetitive work.

AI lacks understanding of and reasoning for its actions. It processes requests based on its training data and statistical patterns. It may generate responses to your prompts, but they will lack logic and reasoning, so it won't remain practical. AI relies on code written by humans.

The algorithms used in AI models are statistically data-driven, and the solutions AI provides are only up to the mark if its training data is. However, that data can be biased or incomplete, as tons of information on the Internet is garbage. In those cases, AI may produce substandard code or make mistakes. Corporations will still take considerable time to implement these models in their systems. Take the Samsung and Microsoft problems, for example.

Exposing Code & NDAs

Take the incident from April this year at Samsung's offices in Korea. Engineers exposed NDA-bound data and code, including material from a semiconductor database, to OpenAI's models for code generation and debugging purposes.

Some companies cannot directly delegate work to AI research companies; they'll need a middleman, a supervisor or engineer who can understand the code, put it all together, and make it work. For the same reason, Samsung is building its own custom AI model to avoid these situations, and engineers like you and me will help there. Ultimately, each company may create its own AI model, but it will not delegate enormous tasks to it.

Funny enough, Samsung disclosed its non-open-source code to OpenAI, effectively giving Microsoft free access to it through one slip-up. Not to mention, Microsoft is an investor in OpenAI and thus close to the data. Even if a privacy policy stops Microsoft from looking at Samsung's code, Samsung still voluntarily handed its code to a competitor. In such cases, companies will not directly trust other corporations. They could build their own AI models, but that is not feasible for most, and the benefits of keeping employees outweigh adopting AI.

By the way, the code Samsung engineers surrendered to OpenAI could be used to improve the future responses of its chatbots, since these models detect patterns and learn from the answers paired with specific prompts. OpenAI's user guide states that users shouldn't submit confidential data, which gets stored and fed into its training system. Samsung employees slipped up three times before the corporation banned the tool.

Well, I also have another issue to point out about GitHub Copilot. It can generate code, but not necessarily in the most promising way possible.

GitHub Copilot Issue

GitHub Copilot cannot replace programmers yet because it does not have the ability to think, execute code, solve complex problems, or generate new ideas. Copilot also steers developers toward the risk of copyright disputes. That is a different controversy.

GitHub and OpenAI have been accused of reproducing open-source code from developers whose licences do not permit such reuse. Unsurprisingly, Microsoft holds an enormous position through its investments in OpenAI, and it also owns GitHub, which hosts a vast share of open-source repositories. You can see the problem here.

Nonetheless, corporations will have to pay a hefty amount to implement AI into their systems, which may not bring them additional benefits. The ones who can splurge on AI technology will continue to do so, but most will integrate it only at a ground level.

On the other hand, some folks argue that since AI can solve LeetCode and software engineering interview questions, are engineers even necessary? Well, allow me to explain.

Interview & Leetcode Preparation

AI is already extremely good at solving LeetCode problems and projects, especially with ChatGPT 4.0, but, in reality, interview questions stop being relevant once engineers start the job. Ask any engineer around, and they would say the same.

Interview questions test you the way JEE or NEET entrance exams use difficult maths and science MCQs: they inspect your thinking process and your capacity to learn in a short duration. AI passes the interview effortlessly but fails at the actual programming work of collaborating with other people to find a solution.

For instance, take college degrees. Companies ask for degrees and percentages as a shorthand for your capabilities over a duration of usually four years and for how you perform under pressure. It has little to do with your intelligence. Employers tend to assume graduates already have collaborative skills, problem-solving skills, a continuous-learning attitude, etc.

Even granting that AI can solve LeetCode questions, the argument doesn't hold water. AI can generate LeetCode solutions because the answers to those questions are available on the Internet. These models are trained on Internet data, and I believe they still don't have a system to adapt to genuinely new questions.

So AI essentially reproduces existing LeetCode solutions with a few modifications, matching words by pattern rather than by logic. Even if you copy code from GPT to answer interview questions, you cannot answer the spontaneous follow-up questions thrown at you.

LLMs (Large Language Models) memorize everything and ultimately act as rote learners, and as we know with humans, people who only learn by heart end up in the ditch. Problem solvers can adapt to new challenges, and AI lacks that capability. It remains limited to the walls of the Internet.

Your experience while solving problems holds more value. Even Chris Lattner says that as much as solving problems or building applications involves coding, programming equally requires working with people, which means collaboration. It is about working together to understand the product, the requirements, the emotions used to move customers to buy the product (building strategies), etc.

You must ship out a product that other developers can work on. It must be reliable and maintainable. Otherwise, fellow developers working on the same project will face difficulties, and the main objective is to strive for collaboration. When developers get more productive, more jobs get created in all different areas because developers form solutions to problems, and the management involved in that solution hires other employees to keep everything going. AI cannot generate reliable or maintainable code, and I will explain why soon.

Answers to LeetCode questions are usually in the training data of GPT models. The model is not solving the question but merely recalling and emitting the answer from patterns detected during the training phase. Technical interviews are not about difficult LeetCode questions; they are there to check your problem-solving skills. Not to mention, the generated code usually contains bugs.

Code with Bugs & Errors

AI researchers make subtle advancements and demos that freak people out and keep them obsessing until the next big thing, but this looks like the Web 3.0 bubble. Back then, people thought web developers would be made obsolete by website builders and no-code tools, yet I continued to earn multiple six figures alongside those builders.

AI models get trained on buggy, inefficient, and useless code from the Internet. We humans write inefficient code all the time. All that flawed code gets dumped on the Internet, and these models use it to give us answers.

AI propagates those bugs to its users, and users struggle to solve them because most don't even know how to write "Hello World" in JavaScript, yet they wish to create a chatbot.

The code AI generates isn't directly useful to you unless you understand the generated code or program. In my book, I promote the same ideology: even if you use AI as a tool, you must know the fundamentals to apply the code it generates and to fix the problems in it.

Now that we have discussed corporate issues, code and text generation issues, and more, it is time to look at the impact on engineering salaries.

Impact on Salary & Job Opportunities

The job of a Programmer requires a lot of core fundamental knowledge to build solutions. The role of a problem solver remains demanding, and highly senior engineers are those problem solvers that are hard to replace due to their extensive experience building applications. Experience brings solutions to problems that don't even exist on the Internet.

If you have knowledge of something that does not exist on the Internet, you are a gold mine. But if the Internet knows what you do, so do AI models.

The developers starting out or planning to enter the space will get paid relatively little, in line with their experience building applications and their current knowledge of the field or a language. Once you get past that stage, build complex applications and software to learn different ways of solving a problem, work in a team, and gain experience, as you would in a startup.

Companies pay unimaginable millions of dollars for applications built with JavaScript, which powers practically everything on this planet, or Python, which dominates data science and automation. You could learn one of these languages alongside the other points I cited earlier. Also, some work may get transferred to specific countries, which could affect overall salaries.

As AI advances further, beginners (L1 to L2) won't get to solve low-level problems anymore; AI will do it for them. Instead, they will focus on fixing larger issues or managing AI output, much like the L4 and L5 folks who run entire projects while delegating work to L3s, L2s, and L1s.

Furthermore, L1 and L2 roles (beginners or interns) won't exist as they do today. People will have to learn quickly and enter directly at the L3 level by building projects and gaining experience on their own, but in the correct manner, as I'll explain in a bit.

Job openings are projected to decline by 10.2% by 2031, which means that if you don't gain extensive experience and deep knowledge of the fundamentals to build complex solutions, stand out from the competition, and create your own unique category, you will have a tough time fighting the crowd.

You can combine skills, say, entrepreneurship with software development and UI/UX design, as I did, to stand out. Here's the declining graph, even though in some years, such as 2022, there weren't enough developers to fill the open jobs:

Data from Bureau of Labor Statistics

Beyond a certain point of automation, people expect governments to step in and balance job opportunities between AI and humans, and I agree. Rather than AI taking a slice of the cake, the cake itself will steadily grow, and new jobs will appear seemingly out of thin air. History suggests that when one kind of job gets taken over, new opportunities come into existence: the number of jobs increases in adjacent tech sectors, or entirely new sectors emerge within the tech industry itself.

For example, take the data provided by the WEF. They predict that certain positions will be displaced while others will see more demand, which can help you gauge the ratio and rate of opportunities.

Future of Jobs Report 2020

If truth be told, most engineers are interns and low-level employees in corporations, so the demand for high-level ones should increase.

If developers become more productive, demand should increase, since services will become more affordable and delivery faster, letting even smaller brands and companies build high-end software for their customers. On the other hand, if things become more affordable, we could see a decline in overall salaries. Either way, salaries have already dropped steeply because of the recent VC funding issues, but we'll get to that in a bit.

In terms of job opportunities, mediocre developers barely stand a chance of getting hired anymore, which means everyone needs to upskill quickly. By "mediocre" I mean developers whose responsibilities were limited to implementing basic features and routine tasks.

Only people with plenty of experience and solid knowledge of the basics will stand out while working with AI. If you're a student or someone without enough experience, it's time to buckle up and create lots of projects to gain that experience; build complex solutions. You learn and adopt tons of skills while creating projects independently, and those skills pay off in interviews, in client projects at a corporation, or in the process of building a business.

The ones who entered the tech industry with bare-minimum experience and a few LeetCode questions will get dismissed; they were only hired to solve mediocre, small, or mundane problems back when VCs (venture capitalists) were pouring boatloads of money into tech companies.

However, that ship sank, with the layoffs as the clearest indicator of the money drying up. Realistically, governments will soon step in to regulate the use or creation of AI, or to help displaced workers with incentives or schemes that offset the employment losses against the gains made by the economy.

As AI advancements emerge, we will see more PMs (project managers), designers, and admins than low-level developers, because the most experienced and seasoned developers tend to become PMs and Staff Engineers (L5 to L7) in the traditional progression. Low-level engineers will get replaced, and high-level ones will get promoted into more complex problem-solving work, as happened back in the day, only faster now.

If anyone makes a place for themselves in the remaining category of developers, it should be you. Even if AI tries to replace developers, we still need humans to supervise its work, so if AI displaces jobs en masse and corporations adopt it widely, we should see a rise in these supervision roles.

The 2022 layoffs had a significant impact on developer salaries. Still, a report called "The Hired 2023 State of Software Engineers" noted that wages increased for remote jobs in the San Francisco Bay Area (SFBA). But if you are into software development only for the money or for a remote job, without the aspiration to solve problems, you are in a challenging situation.

The layoffs shifted corporations' attention toward more experienced developers who could genuinely contribute rather than perform mundane tasks. By December 2022, 72% of interview requests went to candidates with six or more years of experience. While we're on wages, we should also understand why the recent layoffs really took place, from a financial angle, including why the media decided to threaten engineers.

Media & the Financial Perspective

The media used this hot topic to their advantage and hyped every detail of the new technology, including its side effects, such as layoffs. But they conveniently skipped the part where most of the affected employees were low-level engineers replaced because their work was routine. Repetitive tasks are exactly what computers are good at.

Besides, since I look at situations from a financial standpoint: most of these layoffs happened because of what investing lingo calls a funding winter, a period when companies and startups, especially tech ones, run out of VC money and fire software engineers to cut expenditure and save cash.

They did this because they over-hired engineers and over-inflated salaries while investors were pouring money in, and one fine day it all came crashing down.

Even though these engineers were convenient to have, most were helpers, and the companies could run without depending on them. In fact, most of the employees laid off during this period were logistics and management folks rather than engineers, yet even those layoffs got labelled as "laid off due to AI," which creates a lot of misunderstanding, or, in other words, publicity for the media.

The bulk of the layoffs took place because of the funding winter. When engineers do get fired in these events, they are usually routine or mid-level engineers whom companies can survive without. I want us to stay in the dominant league, where companies cannot fire us.

I believe influencers have over-hyped the entire launch of AI. Underneath it all, it is still a tool, and it remains one, much like a calculator to a mathematician. But we're not done with the media yet. Students and engineers mostly feel threatened by demonstrations and videos of AI creating websites and software, put out by influencers and non-engineering folks attempting to build applications with AI tools.

However, let's find out the reality behind these videos and their real intention.

The problem with demonstrations and claims

Most tweets and videos on platforms like Twitter, TikTok, and YouTube Shorts that show AI performing an engineer's job are demonstrations of mundane tasks.

Since Elon renamed Twitter to X, his favourite old name from the early 2000s, I'll keep using "tweets" to mean its standard videos and 280-character posts. Take the tweet below as an example.

https://twitter.com/ammaar/status/1679939953956929538?s=20

Basically, this is a mundane task meant for someone at an early engineering stage. A novice learning HTML & CSS (or .NET) can do this on the Web; I am not proficient in mobile development, so I can't say whether it's equally easy there. The example shows the potential of an AI model such as Bard, but it is not enough to claim that AI can replace engineers.

Ammar did not claim that AI will replace developers; I only use his demo as an example of how people intimidate developers with videos of these mundane tasks. The fact that Bard could understand and process the requirement from an image amazes me. Still, at this stage, I despise people from non-engineering backgrounds claiming these models can replace developers.

The author, Ammar, has years of experience in mobile development, especially iPhone applications, so he was able to fix any errors that came his way. Notice how (swiftly) he fixed those errors? Based on my research, he had prior knowledge in the field.

And this is why I emphasize that we need someone, preferably someone with solid fundamentals, to operate or supervise these models and fix those errors. These are Bard's current capabilities, and this represents one of the rare cases where Bard does not hallucinate.

We have not reached the stage of replacing engineers yet. Tasks like the ones Ammar showed will get automated by AI; we won't need beginner-level developers to perform them anymore, since AI will take them over soon.

AI models, especially ChatGPT, are repetitive. Even the websites an AI generates look repetitive; after all, these models were created to take over repetitive, mundane tasks. Take no-code tools and website builders as an example: most of their output looks alike.

While testing ChatGPT for this piece, I noticed a pattern: it kept repeating results, words, and phrases, which suggests it is neither as creative as humans nor accountable for its consequences.

I've seen people like Adam Hughes argue that AI will replace programmers. Adam points to a YouTube video of another creator who built a "Guess the number" game in Python using AI.

Now, the demo is pretty simple, but Adam stretched it far beyond its significance out of excitement for the advancement. That ends up scaring developers, so it is crucial to show the other side.

An AI bot generated this Python game, and Adam Hughes used the example to state that AI can replace engineers. The video: https://youtu.be/L6tU0bnMsh8.

Notice the resemblance between this example and the previous one by Ammar: AI can provide skeletons of large applications, but it cannot build full-fledged, usable ones.

In the coming years, AI could move from these skeletons to mid-level games that kids play, but proper video games, especially desktop titles, will take more than a decade; just look at the complexity required to create them.

Jokes aside, even a first-year diploma student can write the game Adam Hughes cited above in MS .NET with a few lines of code in a few minutes.
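
To make the point concrete, here is a minimal sketch of such a guess-the-number game in Python. It's my own illustrative version, not the exact code from the video, and the function name is mine:

```python
import random

def play_guess_the_number(guesses, secret=None, low=1, high=100):
    """Answer 'higher', 'lower', or 'correct' for each guess,
    stopping once the secret number is found."""
    if secret is None:
        secret = random.randint(low, high)  # inclusive on both ends
    feedback = []
    for guess in guesses:
        if guess < secret:
            feedback.append("higher")
        elif guess > secret:
            feedback.append("lower")
        else:
            feedback.append("correct")
            break
    return feedback
```

Pass a fixed secret and a list of guesses and it plays the whole game deterministically, which is essentially all the demo amounted to.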

Claiming that these samples show developers can be completely replaced suggests those individuals haven't been in the industry long enough, even though Adam claims he has. It is contradictory to showcase AI's current progress while simultaneously claiming it can replace developers: you still need a developer to use these tools and build those applications.

Nonetheless, during my research I read a lot of articles, and one of them, from a major publisher, used sources and headlines from nearly a decade ago to inform, or rather scare, people about heavy layoffs caused by the new AI technology.

Ironically, when I dug into those allegations, it turned out the employees had not been fired in the first place, and the ones who were going to be fired were responsible for mundane "software maintenance activities," as their news source put it. These tasks included software testing, approving and granting loans, and other work that could be replaced effortlessly.

The writer used the word "might" to mark the layoffs as a prediction, with a bold reference to sources from 2016. A CS student who doesn't read it carefully, word by word as I did, would come away misled. So stay sceptical of these claims; most writers merely want readers hooked on their articles.

The sources usually don't explicitly state that employees were fired; they predict it based on anonymous leads. By the way, my sources are always listed at the end of my articles, in case anyone wishes to correct me at any point.

While reading these blogs, double-check whether the executives they quote, names like Sam Altman, actually said the words attributed to them. During my research, I found many writers using famous names to make their points when those people never made the statements at all.

Essentially, you can start to see how much wrong information is floating around the Internet, the very Internet AI models train on. Imagine AI using that inaccurate information to answer its users, which, to an extent, it already does. Avoid such articles and do your own fact-checking or quick research so these claims don't cause you problems later on.

Is GPT-4 better?

To the people shouting that GPT-4 can do better: no, it cannot. It is a bit more reliable than the previous version (GPT-3.5), but in most cases it still doesn't do what we ask and keeps hallucinating like a standard AI model.

Small improvements aside, GPT-4 can only create generic websites, games, and the like. Moreover, the code comes from the Internet itself, so it isn't writing original code yet, although that also depends on the user's prompts.

GPT-4 randomly stops mid-answer, and there's another problem I discovered during my research: the programs it generates usually cram everything into one file. It combines the JavaScript and CSS into the HTML file, with inline JS that raises security and efficiency concerns for a website, and a style tag that comes with its own disadvantages.

Ultimately, it wrecks readability. Unless you explicitly ask GPT to separate the files, it won't. And unless I'm a developer aware of the consequences of keeping everything in one file, that could create real problems for me. GPT-4 also cuts code off and then rewrites the entire thing from scratch, though that could just be me carefully picking at its mistakes.

Beyond all that, the current demonstrations of GPT, Bard, or any other model are overhyped. Logically, people without prior knowledge cannot use them to their fullest extent without running into the issues I cited.

If I weren't a proficient developer, I wouldn't know that the kind of code GPT-4 produces can introduce security problems. A novice will struggle here, and AI models keep throwing errors at their users. Even GPT-4 creates faulty games, and if you cannot fix those issues, you are in jeopardy.

These tools look more like a way for engineers to generate base code for simple ideas and build on it to get started faster, rather than a miracle for non-engineering folks.

If you watch Dave Lee's video demonstrating GPT-4, you'll see the mundane, basic tasks I'm talking about. I built a similar text-reader feature myself; see 11:00 of "Putting GPT-4 to the Test - Can it code a complete game from scratch?"

History repeats itself (Historical Precedence)

Whenever such advancements arrive, some jobs vanish while others emerge. You might have heard of "prompt engineering," right? It's a new job position coined by AI folks.

Take the dot-com bubble bursting in the early 2000s. People thought that event marked the end of the fast-growing Internet sector, but despite the downfall, the burst gave rise to new specializations, including front-end development, which shaped my career as a serial entrepreneur.

For my financial perspective on layoffs, take the 2008 financial crisis (the Great Recession). Tons of employees were fired and companies went bankrupt because a few folks decided to gamble on the real estate market while bonds carried inflated ratings handed out for cost-cutting reasons.

But it forced businesses to rethink their strategies, make more deliberate decisions, and it spurred the creation of new jobs. If you want more evidence, consider the early 2010s, when mobile devices entered the development space.

Many worried that desktop-focused engineers would become less relevant with the rise of mobile devices. Yet today we use both, and we optimize applications to work properly on each, with slightly more weight given to mobile.

These mobile devices created new jobs and new fields for developers and designers. Another example could be the recent Covid-19 situation.

**When we replace jobs, we create new jobs.** We don't realize that as new tools emerge, users develop higher standards and requirements, which drive the invention of yet more tools. Innovation never stops, because what was impossible years ago becomes reality, and when impossible challenges like these become reality, new and often massive challenges appear.

According to the WEF's (World Economic Forum) "Future of Jobs Report 2020", 85 million jobs will be displaced by the shift of labour between humans and machines, but the same shift will create 97 million new roles, a net gain of 12 million.

Let's take a scenario where all developers get replaced: everyone has AI tools, or only AI is writing programs, and engineers sit in the back seat. What would happen then? Would two AIs compete against each other? The answer is on the following Notion page; including all the research here would make this piece far too long.

So I separated it out for folks interested in the implications of two AIs facing off in poker, chess, and much more: https://crackjs.com/will-ai-will-replace-engineers-part2. In the same document, I cover reinforcement learning and imitation learning, the two ways an AI model learns from data and detects patterns while taking feedback. You can check that out too.

An Introduction to Existential Risk & AI Security

A few months ago, I watched Ali Abdaal's podcast with Liv Boeree, a world-champion poker player. She spoke about existential risk for humans, and I want to cite that here. I won't go in-depth, as my book will contain the dense details on AI safety, but here's a snippet from that podcast explaining existential risk and the chances of it.

She says we are spending a lot of money on making significant progress in AI but not on tracking its safety: the ratio of money spent advancing AI versus money spent making it safer is 500:1. Liv still considers AI unsafe, and even she agrees we need stability, since AI is disrupting multiple industries.

It is a matter of keeping the negative externalities to a minimum while everyone optimizes for speed, racing to release their AGI software before the other company does.

She mentions Longview Philanthropy, whose main focus is existential risk: reducing and managing growing threats such as climate change, environmental damage, and the dangers posed by emerging technologies like AI.

She points out that there is a lot of uncertainty. For the people who chant "we're good for at least 20 years," Liv says their heads are in the sand; for those who say "we're going to die in 5 to 10 years," she says they are overconfident. We need to manage these threats and stay prepared for them.

Irrespective of whether Covid was planned, engineered, or made in a lab, the virus leaked anyway. We need to prepare for such pandemics, because labs keep producing more and more agents, and if any of them reaches the general public, it becomes a problem. Leaks happen, and the results can seriously hurt people. For those scenarios, we must use the long-view philosophy to prepare ourselves.

You can map the scenario of viruses leaking to the public onto AI models going out of control, which would do more harm than a few years of a pandemic. Anyway, I believe I've talked enough about problems. Now it's time for solutions.

What are the solutions, and how do you sustain yourself in this AI era?

A recent survey by Wiktor of 163 people shows that 40% think AI won't replace contemporary developers, 34% believe it will take time (essentially after 2031), 21% voted that it will take five to ten years, and the remaining 4% said less than five years. But is that the whole truth?

Problem Solving > Coding

Let me be transparent: the engineers who don't contribute considerable value to their company and only perform repetitive tasks will be replaced by AI. Rather than worrying about whether you will be replaced, focus on what's in your control and start contributing as much as possible.

In these situations, developers with extensive experience stand out from the crowd of thousands, because AI can supply knowledge, but it cannot supply the unshared lessons learned through experience.

For now, AI tools such as ChatGPT can only learn from data on the Internet. They cannot learn by themselves; this is still weak AI. You still have a chance to prove your value.

As I argue in my book CrackJS, programming or software development goes far beyond merely coding applications. Most people cheering for AI tend to forget this, because they never experienced it first-hand, but that's fine; we're here for a reason.

I believe that people who rote-memorize coding solutions will be replaced first as AI becomes more advanced, because they are not problem-solvers. They are coders.

Not to mention, AI learns everything by heart and then produces solutions; it is not coding by itself. If you think about it, rote learners were effectively replaced the moment ChatGPT launched in late 2022, since it can memorize better than they can.

The people adopting problem-solving with tools like generative AI at their disposal will get a competitive edge and lead the world.

You must understand that software development isn't only about writing code. It is about solving problems, understanding business requirements, and forming strategies while prioritizing users.
AI, by contrast, lacks accountability and only performs the given task to produce the outcome we ask for, not the one we actually want.

Developers can interact with users, understand their requirements, adapt to changing desires, and solve problems. AI systems cannot do that yet, and AGI hasn't arrived.

AI will help us quickly navigate problems, debug them, and continue solving other massive problems. It will boost our productivity by reducing the time each task requires. As long as there are problems to solve, there will be a demand for problem-solvers with innovative solutions.

The media claims AI will replace humans, but ML engineers aren't even sure text-generation bots count as "AI" advancements in the first place. I've spoken to plenty of data scientists and engineers in the field, and they say "AI" is simply a fancy label for these advancements; we still have a long way to go to reach that outcome.

Instead, they say the world still needs creativity and problem-solving skills to resolve complex problems, not a model that generates or memorizes text based on data from the Internet.

Anyone can generate code, whether you write it or an ML model generates it. It's like hammering in a nail: it doesn't matter whether you use a hammer or a nail gun.
The nail gun just saves time, like AI in software development. But in the same way a nail gun cannot build a house, an ML model cannot create a full-fledged, complex solution or piece of software.

Even when no-code tools try to replace developers, someone at the other end must act as the developer and write code one way or another. Every no-code website merely places an abstraction layer, with a friendly user interface, on top of the code-writing phase so newbies can make websites and call it a day. Eventually, engineers are the ones making that site possible.

Software engineering requires negotiation and persuasion skills: crafting grand-slam offers, building and maintaining empathy, and convincing shareholders and managers. AI cannot adopt these skills; it has no emotions, awareness, or persuasive ability in the first place unless a human teaches it, and even then the model remains a copy.

If you're not a problem solver who thinks beyond coding, you're not fit for this.

Easier for us

Programming has always been getting easier for us. For programmers today, only the logic and the actual code-writing remain the hard part, apart from engineers' design work. We began with punch cards (a technology dating back to the 1800s), then assembly language (a real pain, trust me), then C, and then incremental jumps to modern technologies such as JS or AI.

The point of modern programming is to hide the tedious low-level work so developers can focus on the genuinely hard parts: designing the architecture, developing intricate algorithms, and so on. Developers do far more than write code.

Most consider writing code the most fun part of programming. The company requesting a feature only thinks about getting the feature, like most newbies and no-code folks nowadays.

Programmers, however, decide how to implement it: choosing systems and algorithms, integrating with the existing codebase, refactoring, considering scalability, and so on.

Beyond Coding

Most of the time, even the customer doesn't know what they want, and this is where an AI system fails. Humans are better at understanding those sentiments and goals; a machine isn't. It might act anyway, but actions taken toward the outcome of a misunderstood assignment can lead to disaster.

Only credible, seasoned developers get the opportunity to interact with customers, build strategies, and work within the environment to test and correct course. Your goal should be to become one of them, so you can't be easily replaced.

You need to dig information out of customers and think outside the box. Customers find it difficult to explain all that directly to AI-based software, or to tech companies trying to adopt AI as a replacement.

Humans can think creatively and critically, and they carry real accountability; they understand the consequences of deleting an important database during an upgrade. AI of this era still lacks all of that.

It falls short at properly understanding and analyzing a problem. AI restricts itself because, above all, it operates on work created by humans. We can collaborate, share ideas, and brainstorm different approaches; AI cannot.

AI cannot replicate human creativity, imagination, and the ability to understand the nuances of human behaviour or human/consumer psychology.

The Nature of Programming

Programming by its nature continuously incorporates automation. Consider the libraries and frameworks we use to automate parts of our programs: you don't need to derive the mathematical equations behind every unique component on your site, because a ready-made solution already exists for that and nearly everything else.
LLMs help write code, but they help developers, not business owners or consumers directly.
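
As a tiny illustration of that layering (my own example, not taken from any particular library's docs): the first function below does the frequency bookkeeping by hand, while Python's standard-library `Counter` automates exactly that part. Either way, a developer still has to understand what both versions do.

```python
from collections import Counter

def top_words_manual(text, n=3):
    # The "by hand" version: build the frequency table ourselves.
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:n]

def top_words_library(text, n=3):
    # The library version: Counter automates the bookkeeping.
    return [word for word, _ in Counter(text.lower().split()).most_common(n)]
```

The skill didn't disappear when `Counter` was written; one particular part of it got automated, which is the whole pattern this section describes.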

For instance, no-code platforms have been around for a while, optimizing for specific solutions, yet clients still prefer reaching out to a software engineer for a properly customized one. The skill never goes away; only particular parts of it get automated.

Seasoned developers tend to code less than they did in their junior years. We move into management, software design, architecture, or other larger roles, because we have gained experience with the process and are trusted to make informed, high-stakes decisions to solve specific problems.

The role of programmers may shift as technology improves, but they will remain a crucial part of the technological future. How far programmers get "replaced" depends on the advancements to come.

After all this, who wins? Who comes out on top? Continuous learners.

Who will win this war?

The people who embrace artificial intelligence with open arms will remain at the forefront of this war (or battle, of sorts). When such advancements occur, we expect the respective fields to grow, and the people who resist the change are the first ones replaced.

I'm not saying that you should embrace it and become a robot yourself, but begin to use it and understand the actual purpose of this.

While generative AI provides valuable tools to boost productivity, it doesn't threaten the long-term viability of software development as a career. In levelling terms, the L6, L7, and L8+ engineers in a company will survive as long as AI exists, because they are the ones who built it.

In the meantime, people can learn machine learning and prepare for the future. Experience and skills are the two crucial factors separating the mundane workers who will be laid off from the enduring knowledge workers who will see salary hikes. The more you learn, the better your chances of staying ahead of AI.

Communication is a big part of software engineering projects; without it, we're nothing but fools. It is a complex field that demands continuous learning, reasoned justification of approaches, and real problem-solving skills, none of which AI possesses.

With communication comes collaboration: working with customers to understand their point of view, journey, and reasons for building the software, alongside the joy of working with other engineers to brainstorm, fix issues, conduct reviews, and more.

Good developers keep learning new skills, unlike professionals in many other fields. There are only a handful of fields where continuous upskilling is itself a skill; it is one we have mastered and keep practising.

If an AI system shows up to ruin our careers, we will pivot, learn how to work with AI, and eventually carve out a position for ourselves. Only developers with that skill and an unyielding attitude can win this game. Take calculators as a prime example of such automation in the physical world.

You could be in danger if you refuse to adapt and just cry about it, because AI keeps evolving at a breakneck pace. I stressed about AI taking over for quite a while myself, but the more I researched, the more I realized half the claims are like guns with an empty magazine.

As AI evolves to complete more complex tasks, it eliminates a significant layer of engineers, which pushes high-level engineers toward other, more complex problems, and the loop continues.

Start experimenting with it, and get creative about solving higher-level problems than building static websites for companies. That's what I did, because I accepted that AI will soon do that better than me.

So I adapt to these changes: I enter new spaces in software development, I've been dabbling in building an AI model myself, and I keep upskilling by whatever means I can.

If you can quickly learn something new that pays your bills while keeping you happy, you're in for a great career. Don't fear change. Instead, accept that change and always look for ways you can better exploit that change for your benefit. However, before I conclude, I must touch upon another crucial topic.

Skipping Experience

Even though I said that only seasoned developers can make a living in this era, you cannot skip stages of software engineering to build a career.

A beginner cannot jump to advanced concepts because advanced concepts stand on the shoulders of basic ones. No one can stop you, but I wouldn't suggest leaping, because you will fall hard when you fail to implement complex solutions.

Every intricate solution, even the ones generated by AI, requires a deep understanding of the basics.

The better path is to learn the fundamentals of a language or field first. Master the basics and you can win anywhere; skip the deliberate learning curve and you will fail miserably. Build complex concepts on top of those basics, and complex solutions follow easily.

If AI truly understood the code it spits out, it wouldn't make mistakes. Even software engineers write wrong code, often because they don't fully understand what the code does or the logic behind the algorithm.

That usually happens when your basics aren't strong: just as in maths, when you try to prove something without solid foundations, you make silly mistakes.

You must begin from the bottom and experience the journey from beginner to software engineer in the most efficient way possible, using AI, to become a high-level engineer.

You only gain experience and intuition by performing beginner tasks and taking on the projects that junior engineers handle. Push yourself.

Instead of being scared of AI, developers should embrace its benefits and use it to learn new skills or concepts in programming. Instead of surfing the Internet, you can ask GPT for answers or test cases. Testing different situations becomes easier with AI: it generates answers to the "what if" questions.
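As a concrete sketch of that "ask GPT for test cases" workflow: hand the model a small function (the `slugify` helper below is a made-up example, not from this article), ask it "what if the input is empty, has extra spaces, or mixed case?", and then run the suggested cases yourself, because AI-suggested tests must always be verified by a human.

```python
def slugify(title: str) -> str:
    """Convert a title to a URL slug (the function we ask the AI about)."""
    return "-".join(title.lower().split())

# Edge cases an assistant like GPT might propose when asked
# "what if the input is empty / padded / mixed case?"
# Each pair maps a raw input to the expected slug.
suggested_cases = {
    "": "",
    "Hello World": "hello-world",
    "  extra   spaces  ": "extra-spaces",
    "MiXeD CaSe": "mixed-case",
}

# Never trust the suggestions blindly -- execute them.
for raw, expected in suggested_cases.items():
    assert slugify(raw) == expected, (raw, slugify(raw))
print("all suggested cases pass")
```

The point is the division of labour: the AI is good at enumerating "what if" scenarios quickly, while you stay responsible for deciding which expectations are actually correct.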

What if this piece finally concludes with a summary? Well, if you wish.

Summary

Understand the fundamentals properly and learn to upskill yourself immediately when new advancements arise. You don't want to be the person who keeps sobbing about the latest inventions while refusing to upskill for the new jobs they create.

AI won't replace high-level (L4+) engineers, but that doesn't mean you stop learning if you're below L4. You must pick up new skills quickly and speed up your learning to catch up. Even college students can take related courses alongside their studies to upskill in their respective fields.

You need experience, knowledge of the fundamentals, and the craving to learn something new if things go south.

Let's stop questioning our career choices at every small step toward a promising future. Innovations happen often. If we keep doubting whether we could get replaced instead of taking the necessary actions, we'll be the first to be displaced.

Instead of asking whether it will eliminate us, the question should be: "How can I use this unfathomable technology to upskill myself?"

Engineers don't just write code; they solve real-world problems and oversee complex systems. History shows that phases of innovation like this are standard in the tech industry. AI will automate parts of software engineering and make it more efficient.

But it will not replace humans anytime soon. It will complement the work of Programmers instead of doing it independently.

You have an opportunity since AI lowers the entry barrier for newcomers to the Programming space. If your foundation is strong, you can use AI as an assistant to create complex solutions. You must leverage that option and learn new AI skills, such as prompting or nudging AI in the right direction to get the best solution possible.

There's a lot of content on the Internet. Your job is to filter it, not find more.

We still require engineers to fine-tune the code, eradicate bugs, and ship the final result to production. Software doesn't get built on a conveyor belt, and it isn't manual data entry either. It takes creativity and varied approaches to test whether an algorithm or system will work and to create designs that serve customers best.

Take the words of the CEO of OpenAI, Sam Altman. He said, "Currently, GPT-like models are far from being able to replace the creative genius involved in great programming. While they could automate many other programming tasks, programming jobs involving creativity and problem-solving skills will not get replaced by GPT models anytime soon."

You can explore these AI tools to make the best use of them.

Clients who require custom software, but are not engineers with enough knowledge to build a full-fledged solution themselves, will still need software engineers.

They can use AI, but they don't know how to turn the code an AI model throws at them into working software. That takes fundamental knowledge of how code blocks fit into a larger structure.

AI could guide them toward a solution, but would major clients spend their own time doing it or delegate it to someone else? That's where you come in. You will turn their vague idea into a proper solution, using tools like AI to speed up the process and save them more time than they expected.

Writing code will become more fun, just as Google Docs made it easier to write books and articles, as Sundar Pichai stated in April 2023.

We must balance AI and human programming capabilities to get the best of both worlds instead of fighting over which side will win. Until strong AI, or AGI, is developed, top-tier developers who focus on complex problems will not get replaced. And when strong AI will arrive cannot be predicted at this point.

At this stage, AI is merely showing its potential. It's neither flawless nor stupid. It's simply a demonstration of the possibilities. Many people will get displaced, but humans won't stop working, new jobs will get created, and we'll find a way to create new opportunities, work in other industries, or supervise advanced systems.

Fewer people will actively work on it, and others will oversee it at intervals. Programmers' roles may change, but there will always be a need for people to code, test, and integrate machines.

AI is still a program, incapable of correcting itself in any way without human input.

When we raise claims like these, we discourage potential problem solvers from creating and launching their innovations, and even students studying computer science (or something similar) at universities and colleges.

AI will definitely replace rote learners, or it already has, but it cannot replace problem solvers, at least not soon. Coding is just one approach to solving problems digitally; it is not the only step in the problem-solving process. Creators without computer science knowledge hyped this up, but we cannot blame them either.

Ultimately, automated systems require human input and human supervision to avoid mishaps, particularly those concerning AI security and ethics. We must look deeper into the unethical capabilities of AI.

Note that AI is not like the robot in sci-fi movies. Those movies do not represent real-world models. Security is a concern, but not to the extent cited in films.

The first rule is to become a continuous learner and adapt to new changes. Also, whenever you read articles, fact-check them before making crucial decisions or forming an opinion. We cannot let these advancements demotivate us.

Nonetheless, I hope this piece helped you form your opinion, gave you insight into the current situation, and eased your anxiety. I'd like to hear your answer as well. Whether you're a programmer, a medical student, or a FinTech CEO, here's the form: https://afankhan.com/humans-and-ai.

I gave you multiple scenarios and angles to look at this situation. The rest depends on you now. With my answer, you can form yours.


By the way, I am writing a book called CrackJS, which covers the fundamentals of JavaScript. If you want to learn more about JS, follow me on this journey as I write the book through articles like this one, trying to help developers worldwide.

If you want to contribute, comment with your opinion and anything you think I should change. I am also available via e-mail at hello@afankhan.com.

The Notion document containing the sources used to write this piece, along with drafts and an extra section: https://crackjs.com/will-ai-will-replace-engineers-archive.
