The AI Truth: Are We Overestimating AI?

Why superintelligence is decades away.

Recently, I have seen a lot of articles and news about how AI will replace jobs across industries within the next couple of years. But it really started bothering me when people claimed it could replace programmers and developers.

As a programmer and an AI researcher, I think this is blatantly absurd, and here's why.

1/ Let's talk about the source of this kind of news and information. These so-called "AI experts" often know nothing about machine learning. Most of them don't even have a degree in math or the sciences.

I know a degree is not all that important nowadays, but many of them have zero programming experience, have never built a machine learning model, and know nothing about how these machines work or the history of advances that led to these technologies.

And just because someone runs a company that uses AI models doesn't make them an AI expert. A few very good people developed these models and made the source code open source. They also put real effort into documenting how to use the models in an easily understandable fashion.

I am not saying anyone can run these companies; I am just saying that running one doesn't necessarily make you an expert in AI.

If I were you, I would be careful about listening to anyone who claims to be an "AI guru". I would look at their background on LinkedIn, their repos on GitHub, and any research papers they have published, and even then take what they say with a grain of salt, because to draw such sweeping conclusions they need years of experience with machine learning and a real understanding of how these machines learn, err, and evolve.

Here are a few people that I believe have earned the right to talk about AI.

2/ Just because a model has billions of parameters and is trained on a large dataset doesn't make it smarter than humans, or even close to human intelligence, because the loss function for these models is still categorical cross entropy (CCE).

It would be a shame if CCE were anywhere close to whatever "loss function" the human brain optimizes.
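For readers who haven't seen it, here is a minimal sketch of what CCE actually computes (NumPy, illustrative only): the model is rewarded purely for assigning high probability to the correct class or next token, nothing more.

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """y_true: one-hot target vector; y_pred: predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)        # avoid log(0)
    return -np.sum(y_true * np.log(y_pred))   # -log(prob assigned to correct class)

# Example: the model puts 70% probability on the correct class.
y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.2, 0.7, 0.1])
print(categorical_cross_entropy(y_true, y_pred))  # ~0.357, i.e. -log(0.7)
```

That single number is the entire training signal. Everything these models "know" comes from minimizing it at scale.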

I think we underestimate how powerful our brains are.

We don't just have general intelligence; our intelligence adapts to our environment. Someone who is very good at video games may not be nearly as good at something else.

We don't have a single threshold of intelligence across all environments. We can perform many mundane tasks easily, and with some practice we can master even very specific skills.

Researchers suspect there is computation happening even inside individual neurons. Just because biological and artificial neurons have a similar shape and serve the same broad function doesn't mean human neural networks and machine neural networks are the same. If you ask me, they are not even close.

Another difference is reproduction. Even if you gave ChatGPT hands, it would not be able to replicate itself. It struggles with basic programming, so how do you expect it to build an LLM?

Here's a simple example. A person with a disability (mental or physical), maybe even with one eye, can learn to drive a car well enough after only a couple of weeks of practice, even if they have never been in a car before. Meanwhile, the latest self-driving models, trained on millions of miles of driving data and then refined for years with reinforcement learning, still can't drive like a human.

We clearly underestimate how powerful our brains are, even if some of us are pretty stupid.

3/ This is the Gartner hype cycle, an industry-standard way to view how an emerging technology evolves over time.

According to the graph, we are currently at the peak of inflated expectations, and we will soon slide into a deep trough of disillusionment where we don't see any major advancement in AI.

And we have already seen this with NFTs and Bitcoin: overwhelming hype, celebrities buying ape JPEGs for millions of dollars, and then a deep trough. I think we are still in that trough.

But the good thing is that technologies slowly climb back out, just like the internet did.

And even once AI climbs out of its trough, we don't know how many breakthroughs away we are from AGI. To conclude, it is far too early to speculate about AGI.

4/ I have used these AI programming tools, like ChatGPT, Bard, and Copilot, and the code they produce is laughable.

Yes, for basic programming, in my case utility functions like reading a JSON file or renaming directories, they do a decent job. They produce correct code, but most of the time it is not efficient in terms of time or space.
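To be concrete, this is roughly the level of "utility function" I mean, the kind of small, well-documented task these tools handle fine (my own illustrative sketch, not actual model output):

```python
import json
import os

def read_json(path):
    """Load and return the contents of a JSON file."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def rename_dir(old_path, new_path):
    """Rename a directory, failing loudly if the target already exists."""
    if os.path.exists(new_path):
        raise FileExistsError(f"{new_path} already exists")
    os.rename(old_path, new_path)
```

Anything at this level, with a thousand near-identical examples on Stack Overflow, they get right. Beyond it, things fall apart.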

And for any slightly complex code, even LeetCode examples, they either give me buggy code that I have to spend hours debugging, or they hallucinate and invoke predefined functions that don't exist.
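Here is a made-up example in the spirit of the hallucinations I'm describing, not verbatim model output: the model confidently calls a standard-library helper that looks plausible but simply doesn't exist.

```python
import itertools

nums = [1, 2, 3]

# Hypothetical AI-generated "solution" for generating all pairs:
# pairs = itertools.unique_pairs(nums)   # AttributeError: no such function

# What the standard library actually provides:
pairs = list(itertools.combinations(nums, 2))
print(pairs)  # [(1, 2), (1, 3), (2, 3)]
```

The fake call reads so naturally that you only catch it when the code crashes, which is exactly why debugging this output eats so much time.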

Overall, I think it's safe to say I would rather write my own shitty code and spend some time on it than get horrendous code from a model and spend my whole day debugging it.

So, I don't think programmers need to worry about AI taking their jobs; instead, they should use it to automate mundane tasks and make life easier.

That being said, if you are a basic programmer and don't work on honing your skills and learning new technologies, AI will beat you. So, never stop learning.
