But someone has to create the app first - any app.
"Then anyone (even non-programmers) could easily decompile and clone someone else's app and repost it in minutes and/or simply bypass any IAP checks."

Then?
No. Then anyone (even non-programmers) could easily create clones of someone else's app and repost it in minutes and/or simply bypass any IAP checks.
Apps like that are a numbers game - create a clone, then buy fake reviews or advertise so your app gets the most downloads and the ad income. No developer is threatened by that.
Is it happening already?

"Then anyone (even non-programmers) could easily create clones of someone else's app and repost it in minutes and/or simply bypass any IAP checks."
Finally, our code will be well commented ... and because it will actually understand what the code does, it will probably produce better code comments too!
"... because it will actually understand what the code does ..."
You have to define "understanding" before you do your comparison ... we don't know what "understanding" actually is! However, we know it is an emergent property that arose from the sheer number and complexity of neurons in our carbon-based neural networks, analysing data to come to a plausible solution to a problem ... to me that sounds much like what GPT actually does!

Large Language Models (LLMs, now often referred to as "AI", like GPT and ChatGPT) actually are not intelligent. They use statistics, and they generate text purely as a result of how often things appear on the internet. Yes, it *is* astonishing what an LLM can produce, but it has absolutely nothing to do with "actually understanding" things. Also, as it produces text on the basis of frequency and probability (not on understanding), the outcome is almost random. Again, it is very astonishing what they produce just based on randomness, and yes, LLMs are great at producing text when you instruct them what to do; they might also be able to produce code, but they can do this only if they have other code to copy from. I don't expect them to create new and original stuff, and I am convinced that no programmer has to be afraid of that kind of pseudo-intelligence.
"Well, the problem is that any attempt to define concepts like 'meaning', 'understanding', 'intelligence', 'consciousness' ..."

Well, actually "understanding" means that I say something, and you understand what I'm telling you.
I said: "I don't expect GPT to create new and creative things", and I said: "you do not need to be afraid of this pseudo-intelligence".
What I intended to say was that, in my opinion, GPT is very restricted and limited in its skills, and that it's not able to replace humans/programmers.
From your answer I can tell that you understood exactly what I was saying, and I also understand that your opinion about GPT's skills differs from mine.
So maybe we can agree that this is basically what people mean when they say "understand": to get the meaning of content and to be able to see (some of) its implications.
If I told the very same thing to GPT, it would not "understand" me. It just isn't able to "get the meaning of content", and it is in no way able to see any implications (or anything else).
What it actually does is this: it takes my words (a series of characters), then it parses them and generates a series of codes, then it looks up those codes in its "tables", and based on the results found, it concatenates words and builds sentences according to some algorithms. The "tables" are built by scanning the internet and maybe other sources of knowledge, and condensing all that in some way, using statistics and weights and categories. (Well, very simplified, but it's essentially that.)
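To make that "tables plus statistics" picture concrete, here is a deliberately tiny sketch - a bigram model that picks each next word purely by how often it followed the previous word in some training text. Real LLMs are vastly more sophisticated (neural networks, not literal lookup tables), but the word-by-word sampling loop is analogous; the corpus and function names here are just made up for illustration.

```python
import random
from collections import defaultdict

# A toy training text - the "internet" of this example.
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
).split()

# Build a frequency "table": word -> list of words observed after it.
# Repeats in the list make frequent followers more likely to be sampled.
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start, length, seed=0):
    """Put words together based only on observed frequencies."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = table.get(word)
        if not followers:               # dead end: no observed continuation
            break
        word = random.choice(followers)  # sample the next word statistically
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

The generator has no idea what a "cat" is; it only knows which words tended to follow which - which is the point being made above.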
The difference is that you as a human "know" what we are talking about, while GPT knows *nothing*, it understands *nothing*, it just puts words together, nothing more than that. It does this astonishingly well, but it has no idea about what it does; it's just a computer algorithm putting words together. It has no idea whether the outcome is true or false. It just puts words together based on statistics; the result can be a true fact, or it can be some "fake news". It's just words in sentences according to some statistics and rules.
Also, this is the one and only thing this LLM can do. For each new purpose, a new model needs to be built and trained.
What a huge difference to a human like you or any other one on this forum! Just one brain, but lots of things we can do!
What actually really impresses me are those "AI" generated images and pictures.