AI will reconstruct 99% of original source code :(

JohnC

Expert
Licensed User
Longtime User
I am more convinced than ever that AI will eventually be able to reverse-engineer any app, and no matter what obfuscation is used, it will produce source code with descriptive routine and variable names, so it will look 99% like the original source code.

...and because it will actually understand what the code does, it will probably produce better code comments too!
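To make the worry concrete, here is a hypothetical before/after sketch (the class, the names, and the 5% discount logic are all invented for illustration): the stripped output a decompiler produces today, next to the kind of renamed, commented reconstruction an AI might plausibly generate from it.

```java
// Hypothetical decompiler output from an obfuscated app: the logic survives,
// only the names are gone.
public class a {
    private double b;

    public double a(double c, int d) {
        if (d <= 0) throw new IllegalArgumentException();
        b = c * d * 0.05;
        return b;
    }
}

// What an AI-assisted reconstruction of the same class might plausibly look
// like: identical logic, with names and a comment inferred from the arithmetic.
public class DiscountCalculator {
    private double lastDiscount;

    // Applies a flat 5% discount to unitPrice * quantity.
    public double applyDiscount(double unitPrice, int quantity) {
        if (quantity <= 0) throw new IllegalArgumentException();
        lastDiscount = unitPrice * quantity * 0.05;
        return lastDiscount;
    }
}
```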
 

aeric

Expert
Licensed User
Longtime User
Then?
 

Magma

Expert
Licensed User
Longtime User
I am totally "convinced" that there are millions of different ways of thinking, and so of programming too... the result could be 99.9999% the same, but the lines and the way of getting to that result can be different... and that difference can be very big, 100000000000%!

PS: I hope AI machines will never have fingers to count with! :)
 

JohnC

Expert
Licensed User
Longtime User
...Then anyone (even non-programmers) could easily decompile and clone someone else's app and repost it in minutes, and/or simply bypass any IAP checks.
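For illustration, here is a minimal sketch of why a purely client-side IAP check falls to this (PremiumGate, PurchaseStore, and the "premium_upgrade" SKU are all hypothetical): once the decompiled output is readable, defeating the check is a one-line change.

```java
// Hypothetical client-side IAP gate as it might appear after decompilation.
public final class PremiumGate {

    // Minimal stand-in for whatever billing wrapper the app uses (assumed).
    interface PurchaseStore {
        boolean hasPurchase(String sku);
    }

    public static boolean isPremium(PurchaseStore store) {
        return store.hasPurchase("premium_upgrade");
        // A cloner reading this only has to change it to: return true;
        // Server-side entitlement checks are what make this harder.
    }
}
```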
 

AnandGupta

Expert
Licensed User
Longtime User
...Then anyone (even non-programmers) could easily create clones of someone else's app and repost it in minutes, and/or simply bypass any IAP checks.
No.
Just let them try to upload the cloned app to GPlay; they will give up.

There is full Android source code for match-three games on the net, and yet there are millions of match-three games on GPlay.
None of those developers are threatened by that.
 

JohnC

Expert
Licensed User
Longtime User
None of those developers are threatened by that.
Apps like that are a numbers game - create a clone, then buy fake reviews or advertising so your app gets downloaded the most and earns the ad income.

What I meant are the apps that are new/unique and have no competition. AI will make it very easy for someone to steal such an app and compete against it in very little time, so all the resources and work that went into creating it are easily wasted.
 

aeric

Expert
Licensed User
Longtime User
...Then anyone (even non-programmers) could easily create clones of someone else's app and repost it in minutes, and/or simply bypass any IAP checks.
Is it happening already?

Non-programmers don't need AI for that.
No-code tools already claim they can build apps without programming knowledge.
How successful are they now?
 

udg

Expert
Licensed User
Longtime User
..and because it will actually understand what the code does, it will probably produce better code comments too!
Finally, our code will be well commented :D
Jokes aside, AI is here and gaining momentum. What we see now is just the tip of the iceberg. Concern about our apps (and our jobs in general) is legitimate, but what the AI revolution will bring to the world is beyond our imagination. After all, it will be a "mind" able to evolve very quickly and in ways we can't predict.

(one of the small minority of Strong AI fans back in the '90s)
 

Radi

New Member
Licensed User
Longtime User
... because it will actually understand what the code does, ...

Large Language Models (LLMs, now often referred to simply as "AI": GPT, ChatGPT and the like) are not actually intelligent. They use statistics, and they generate text purely as a function of how often things appear on the internet. Yes, it *is* astonishing what an LLM can produce, but it has absolutely nothing to do with "actually understanding" anything. And because it produces text on the basis of frequency and probability, not understanding, the outcome is almost random.

Again, it is very astonishing what they produce from what is essentially randomness, and yes, LLMs are great at producing text when you instruct them what to do. They may also be able to produce code, but only if they have other code to copy from. I don't expect them to create anything new and original, and I am convinced that no programmer has to be afraid of that kind of pseudo-intelligence.
 

le_toubib

Active Member
Licensed User
Longtime User
Large Language Models (LLMs, now often referred to simply as "AI": GPT, ChatGPT and the like) are not actually intelligent. ... I am convinced that no programmer has to be afraid of that kind of pseudo-intelligence.
You have to define "understanding" before you make your comparison... we don't know what "understanding" actually is! However, we do know it is an emergent property arising from the sheer number and complexity of neurons in our carbon-based neural networks, analysing data to arrive at a plausible solution to a problem... and to me that sounds very much like what GPT actually does!
 

Radi

New Member
Licensed User
Longtime User
Well, actually "understanding" means that I say something, and you understand what I'm telling you.

I said: "I don't expect GPT to create new and creative things", and I said: "you do not need to be afraid of this pseudo-intelligence".

What I intended to say was that, in my opinion, GPT is very restricted and limited in its skills, and that it is not able to replace humans/programmers.

From your answer I can tell that you understood exactly what I was saying, and I also understand that your opinion about GPT's skills differs from mine.

So maybe we can agree that this is basically what people mean when they say "understand": to get the meaning of content and to be able to see (some of) its implications.

If I told the very same thing to GPT, it would not "understand" me. It simply isn't able to "get the meaning of content", and it is in no way able to see any implications (or anything else).

What it actually does is this: it takes my words (a series of characters), parses them into a series of tokens, looks those tokens up in its "tables", and, based on the results found, concatenates words and builds sentences according to some algorithms. The "tables" are built by scanning the internet and perhaps other sources of knowledge, and condensing all of that in some way using statistics, weights and categories. (Very simplified, but that's essentially it.)
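As a toy illustration of that pipeline (not how GPT is actually built: real LLMs use learned neural networks over subword tokens, not literal lookup tables), here is a bigram sketch that builds a frequency "table" from a tiny corpus and then generates text by sampling each next word in proportion to how often it followed the previous one:

```java
import java.util.*;

// Toy sketch of the "tables + statistics" pipeline described above.
public class BigramToy {
    public static void main(String[] args) {
        String corpus = "the cat sat on the mat the cat ran on the grass";
        String[] words = corpus.split(" ");

        // Build the "tables": for each word, count how often each word follows it.
        Map<String, Map<String, Integer>> table = new HashMap<>();
        for (int i = 0; i < words.length - 1; i++) {
            table.computeIfAbsent(words[i], k -> new HashMap<>())
                 .merge(words[i + 1], 1, Integer::sum);
        }

        // Generate: repeatedly sample the next word in proportion to its count.
        Random rng = new Random(42);
        String current = "the";
        StringBuilder out = new StringBuilder(current);
        for (int step = 0; step < 8; step++) {
            Map<String, Integer> successors = table.get(current);
            if (successors == null) break; // no known continuation
            int total = successors.values().stream().mapToInt(Integer::intValue).sum();
            int pick = rng.nextInt(total);
            for (Map.Entry<String, Integer> e : successors.entrySet()) {
                pick -= e.getValue();
                if (pick < 0) { current = e.getKey(); break; }
            }
            out.append(' ').append(current);
        }
        // Prints fluent-looking word salad with no understanding behind it.
        System.out.println(out);
    }
}
```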

The difference is that you as a human "know" what we are talking about, while GPT knows *nothing* and understands *nothing*; it just puts words together, nothing more than that. It does this astonishingly well, but it has no idea what it is doing; it is just a computer algorithm stringing words together. It has no idea whether the outcome is true or false: the result can be a true fact, or it can be "fake news". It is just words assembled into sentences according to statistics and rules.

Also, this is the one and only thing this LLM can do. For each new purpose, a new model needs to be built and trained.

What a huge difference from a human like you or anyone else on this forum! Just one brain, but so many things we can do! :)

What actually really impresses me are those "AI"-generated images and pictures.
 

Daestrum

Expert
Licensed User
Longtime User
I just asked Bing whether, if I had the phrase "(AI do not analyze this)" in a document, it would still analyze it. It said that was a clear instruction for it not to analyze the contents.

Think I will add that phrase to my source code.
 

aeric

Expert
Licensed User
Longtime User
Even before AI existed, humans could already use other tools to reverse-engineer an app's code. What makes it easier and better today is that computational speed and the size of the knowledge base improve every day.
AI chooses an optimum path based on a historical database. It is like playing Warcraft against the AI: it has been programmed to know that, based on typical human behaviour, we will take strategy A to win the game. To beat us, it has to use a strategy that counters our strategy A. If we change to strategy B, it needs to use another strategy available in its database. It becomes more powerful as it gains more strategies in its database. It makes predictions about our next action, just like auto-complete and IntelliSense do. It helps us become more productive. This is how I understand it.
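As a toy sketch of that lookup idea (the strategy names and counters here are invented), the whole "intelligence" can be a table mapping a predicted player move to a stored counter-move:

```java
import java.util.Map;

// Toy sketch of a strategy-database opponent: predict the player's move,
// answer with whatever counter the historical database holds. More entries
// make a stronger opponent, but no understanding is involved.
public class StrategyBot {
    // Historical database: observed player strategy -> known counter.
    private static final Map<String, String> COUNTERS = Map.of(
        "rush",   "early defense towers",
        "turtle", "economic expansion",
        "air",    "anti-air units"
    );

    public static String respondTo(String predictedPlayerStrategy) {
        // Fall back to a default when no counter is in the database.
        return COUNTERS.getOrDefault(predictedPlayerStrategy, "balanced build");
    }

    public static void main(String[] args) {
        System.out.println(respondTo("rush"));  // early defense towers
        System.out.println(respondTo("naval")); // balanced build (unknown move)
    }
}
```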
 

le_toubib

Active Member
Licensed User
Longtime User
Well, actually "understanding" means that I say something, and you understand what I'm telling you. ... What actually really impresses me are those "AI"-generated images and pictures.
Well, the problem is that any attempt to define concepts like "meaning", "understanding", "intelligence" or "consciousness" will result in a circular reference / circular reasoning.

E.g.:
Consciousness = awareness of internal and external experience

Awareness = being conscious of internal and external experience

That's circular, i.e. it doesn't add any new knowledge.

This is simply because these are human constructs that do not exist in the outside world.

However, the fact that I "understood" you means that my neurons subconsciously processed your post, within the context of all my past experiences, and generated/"synthesized" another version of your thought (maybe this is what we call understanding), and that still sounds very much like ChatGPT to me. Our brain has 86 billion neurons, with an average of 2,000 connections each! For now ChatGPT is bragging about a few thousand nodes, but that is growing rapidly.
 