What enabled us to create AI is the thing it has the power to erase
by delaugust
I totally agree with that feeling.
I have been trying to use AI for coding. Sometimes it is very bad, but I've learned to recognize when it is surprisingly good.
Many times, the AI can show me some code that doesn't work, but that I can simply tune and make work. Many times the resulting code is good: it's not like some copy-pasting by someone who doesn't understand it. In a PR, nobody would think it's an AI, it could have been written by me. It was, in a way.
But intellectually, I did much less. I did not have to imagine what kind of API may exist that solves the problem, or read the documentation about those candidate APIs. I did not have to build the code from the ground up, maybe starting with a simple case, then adding a loop, then going for tail recursion, and back to a loop, etc.
Not doing that admittedly made me more productive. But it really feels like I didn't improve my skills by doing it. I didn't get to discover other parts of the API that I don't need right now but that may be useful later. I didn't get to build my intuition of whether a loop or tail recursion is better here. I just tuned stuff. What happens when I've been doing this for years and now need to debug that code? Will I have the intuition needed to find where it may be failing? Not clear to me.
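To make the loop-vs-recursion point concrete, here is the kind of toy comparison I mean (my own hypothetical example in Python, with made-up function names):

    # Summing a list of numbers, written two ways.
    def sum_loop(xs):
        total = 0
        for x in xs:              # plain loop: constant stack space
            total += x
        return total

    def sum_recursive(xs, acc=0):
        if not xs:
            return acc
        # Tail-recursive in form, but CPython does not optimise tail calls,
        # so the call stack still grows with len(xs).
        return sum_recursive(xs[1:], acc + xs[0])

    print(sum_loop(range(100_000)))                  # works fine
    # sum_recursive(list(range(100_000)))            # RecursionError in CPython

Knowing that the recursive version blows the stack in CPython is exactly the kind of intuition you only build by writing both versions yourself.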
It feels similar to learning a language: I may understand a lot, and if people talk to me slowly and offer me different alternatives, I may be able to say "yes, the second one!". But by doing that I'm not improving my talking in this language. The only way to get fluent, I feel, is to make this effort of constructing sentences, looking for the words, finding ways of working around words I don't know, etc. If I don't do that, I will never be fluent.
AI makes me more productive now, but may prevent me from becoming fluent at all.
> But it really feels like I didn't improve my skills by doing it.
I dunno hey.
I used ChatGPT to teach me python. I had a similar experience, except I started with "This is a project to teach me python" and now I feel comfortable enough that I barely use ChatGPT, except for when I get to a sticky problem and need something to bounce ideas off. In fact I get so pissed off now with the amount of time I spend correcting ChatGPT in larger projects that I never bother to show it the whole code anymore.
However I was working on plugging an Arduino into my nonsense, and I did have it almost completely write the C code for that end of the project. But I am going to upgrade my Arduino shit this year and will be approaching it the same way, this time you teach me the Arduino, mister GPT.
> I used ChatGPT to teach me python.
I can imagine that if you don't know how to program, ChatGPT will be a nice teacher. But I think that there is a difference between learning how to program and using AI as an experienced developer.
I have been coding for almost 25 years. When ChatGPT gives me a snippet in some language, I don't really need to read every word carefully before I get an intuition about whether or not it makes sense. When it gives me a good snippet, I honestly don't really have to read it entirely: I can just skim through it and tune it. I am not making the effort of programming, I am rather making the effort of reviewing. And as we all know it's easy to make a "lazy review".
I think there is a difference between your situation and mine. It's great if ChatGPT helps you learn, but in my case I really think it makes me more productive by preventing me from improving myself, because improving takes time. Somehow ChatGPT is optimising away my learning time, if that makes sense.
Perhaps.
In my role I wear a lot of hats, and abstracting away part of 1 hat while using it to help me develop my skills makes sense.
But if you only wear that one hat, you are getting better at managing ChatGPT, rather than getting better at coding, because that's the most efficient output.
Your experience mirrors my OpenAI experience. You would be surprised at how well Composer mode works in Cursor.
Totally different usage than what you're quoting.
How do you know the bot taught you the correct ("pythonic") way of doing things versus learning by yourself reading the docs?
Very good question. I don't. Perhaps I am pythoning wrong.
I share your sentiment about the experience developing with AI.
But I've been playing with the idea that the concerns and gaps you've listed can just be explicitly communicated to the AI.
If you're concerned about weighing options, just tell the AI that exactly and pair with the thought process iteratively.
For all things AI, one of the most useful things I say is "ok give me the best practice for X" and then "ok what about the contrarian view. show me that. what are the downsides, what about stuff we don't know we don't know"
I still believe it's not the same, but even if it was: actively trying to improve yourself this way makes you less productive.
We are going towards a world where it is not acceptable to be less productive. You will be told to do that in your free time; here, you need to produce as much as your coworkers.
AI changes programming into assembly line work. It's more productive, but it loses 100% of what makes programming interesting to me.
That is a good point. In the same way the lure of $100k+ salaries inevitably attracted the masses, AI means "everyone can code!"
I do share your lament. "Being productive" is a capitalist construct. There's a place for it. But I got into coding by messing with CSS and when the text turned red on my screen by something I did… WOW!!!
I'll never forget that feeling. I make it a point to dabble in side projects on my own terms. Want to always keep that fire lit.
> AI means "everyone can code!"
In the same way as "Dreamweaver" (web IDE) means "everyone can make a webpage" maybe? Doesn't mean they can do it well at all. I made a ton of money in my early days as a web designer gettin' paid to fix up nasty messes of local business websites (cleaning up Dreamweaver tag-soup and such) in my area during those early years when half of everyone with a business "hired" their little teenage cousin "under the table" to build them a website for their business and ended up horribly regretting it.
Time spent on improving prompting pays off, but isn't as satisfying, challenging or interesting as time spent coding.
You're basically just finding a path through the stochastic parrot's numerical maze whereby it will articulate the solution matching what you already have in mind.
Becoming "a parrot's assistant" is an apt way of expressing this frustration.
> But it really feels like I didn't improve my skills by doing it.
The process of finding the right prompt to get the most out of the LLM is programming. You aren't doing anything different, you just switched languages.
No, it is not. I can't accept that "prompt engineering" is engineering.
The process is clearly shortened. I don't have to "go through the pain" of producing the code: it just pops up for free, and then I just have to tune it. It's the difference between writing a speech and proof-reading it. Or listening to a lot of music versus actually learning how to play an instrument. Being able to be critical about music does not mean at all that you can play an instrument.
Some of us are old enough to remember when we said the same thing about C compilers. "No thanks, I'll stick to assembly. I have no idea what that thing is actually doing with my code, but I know it sucks."
Okay, but this obviously isn’t the same as that.
C compilers emit assembly as a direct result of the code written.
The same compiler given the same code will output the same assembly. This would be true for LLMs if you set the temperature to 1 and fix the seed, but you lose the benefits of language models at that point.
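As a rough sketch of what a fixed seed buys you (toy Python with numpy standing in for a real model's sampling step; the function name and numbers are made up):

    import numpy as np

    def sample_sequence(logits, n, temperature=1.0, seed=None):
        # Temperature-scaled softmax over a toy "vocabulary", sampled n times.
        rng = np.random.default_rng(seed)
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return [int(rng.choice(len(probs), p=probs)) for _ in range(n)]

    logits = [2.0, 1.0, 0.5, -1.0]
    run1 = sample_sequence(logits, n=10, temperature=1.0, seed=42)
    run2 = sample_sequence(logits, n=10, temperature=1.0, seed=42)
    run3 = sample_sequence(logits, n=10, temperature=1.0, seed=7)
    print(run1 == run2)  # True: same seed and temperature, same output
    print(run1 == run3)  # almost certainly False: different seed

And even that is only approximately true for real LLM serving stacks, which add other sources of nondeterminism (batching, floating-point ordering).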
There are guarantees about the behavior of a compiler which can be verified by examining the code for the compiler.
The same is not true for language models. They are statistically based all the way down.
If you can look at the past to predict the future, you should be a trader, not a developer.
I.e. because it was wrong to say X in the past doesn't mean it's wrong to say Y today, even if you find some similarities. Doesn't mean it's true either.
So let's ignore such worthless arguments and have a real discussion, shall we?
It looks like we're both maintainers of open-source projects. If we had enough space and time, I'd share a support email exchange I just had with a user. I took a look at your other comments, and the one you made about an hour ago hits close to home, because in this case I was arguably the asshole. My summary below is already long-winded enough that most people will skip it, but you should read it.
I'm the author of a package used for data acquisition from various bits and pieces of instrumentation hardware. It is used in some professional settings, but because it's a random collection of hacks, it's mostly useful to hobbyists with more free time than money. Retirees who are getting back into their old ham radio hobby, for example, like the guy I just (failed to) help. He was asking about how to adapt one of the larger programs in the package to a completely-different test instrument.
Although he wasn't a programmer, and although my code is in C++, he had "heard good things about Python" and was wondering if he should install it on his Windows box and give it a try. I didn't give him the full LART treatment, but I was probably a bit more patronizing than I should have been. I said something like, "That's not even the wrong way to do it. Try feeding the .cpp file to ChatGPT and see if it will help you understand what you need to do to modify it."
That was yesterday. This morning, I got an entirely undeserved thank-you note from him. He got a Python program back from ChatGPT, and after a few back-and-forth interactions, it actually worked, using code he couldn't read to communicate with an instrument I'd never heard of.
He wasn't a programmer yesterday, but today he is one. His first language is not assembly or C or BASIC or Python, but English.
This technology doesn't threaten people, it empowers them. It doesn't harm creativity, it enables it. I don't know about "worthless arguments," but my recommendation is that you change your outlook right fucking now if you intend to stay in the software business, or in the music business for that matter. You'll eventually thank me for the advice, and this time I'll deserve the gratitude.
I think that the example you give is similar to another comment around here, where someone said ChatGPT helped them learn Python. I can imagine that it does help them learn, there. And that's great!
But I believe that it is a different situation from what I was describing. For someone who is already a professional, I think AI risks preventing them from improving. The industry will push us towards using the AI and tuning the output because that is more productive now, even if it means that we don't improve our skills.
> This technology doesn't threaten people, it empowers them.
AI threatens many people in many ways. For instance by enabling phishing to a level never seen before. By being a big copyright-laundering machine in favour of BigTech. By teaching people to believe an eloquent, all-knowing chat interface because most of the time it seems good enough to not fact-check. Etc.
> It doesn't harm creativity, it enables it.
Do you know any professional artist? I do. I have an example in the animation movies business. You know what genAI brings to their job? They are pushed to have a genAI generate images, select those that are good enough, photoshop them a bit and move on to the next.
This is the definition of "harming creativity". If AI makes you twice as productive, it's okay to lower the quality of what you produce, right? You'll just sell more of lower-quality stuff. That's a tendency that has existed in software in the last decade, but it feels like LLMs will bring that to a whole new level.
> I don't know about "worthless arguments"
My apologies. Re-reading it out of context, it sounds harsh. It's not an excuse, but I wrote it after answering a few other comments that all said "people said that for technology X and they were wrong, which proves that if you say it now for technology Y you are most likely wrong as well".
> hits close to home, because in this case I was arguably the asshole
I wouldn't call that being an asshole. Not helping someone for free because they are stuck using the stuff you provided for free is completely fine.
I have had many people over the years telling me how my project sucked, how it was making them lose money, how they wouldn't use my project unless I added X and Y. When being part of a larger community, I've had people pressure me by telling everybody and their cat, on public channels, that "well you shouldn't use this project because it seems unmaintained" (last commit was some weeks before, issues had been answered days before). Or complaining about the license because it makes it harder for them to make money out of my free work.
Many times people don't even realise that they are assholes. They think that if I published my work for free, they are the customer, and they deserve free support.
Users of open source projects make me want to keep my stuff proprietary. Not because I don't want them to benefit, but just because I want them to stop bullying me.
I hear what you're saying, but I have to say that I think you're conflating bad management practices with the effects of generative AI. The artist's boss sounds like a clown following a bandwagon and creating a toxic work environment at the same time. Hope your friend is able to break free, because if AI isn't what eventually drives them to go postal in that workplace, something else will.
As for copyright, that's a matter for the courts and legislature to decide. If copyright maximalism wins, such that training is not considered fair use, or that liability for infringement attaches to the trainer of the model rather than its users, it will really do a number on American competitiveness. Such a decision would transfer an enormous amount of power to countries that DGAF about copyright law.
The most competitive AI models are always going to rely on snarfing up as much data as possible, legally or otherwise, and I'm OK with that. Copyright is a recent invention, historically speaking; we got along fine without it for thousands of years. What's happening in AI, on the other hand, can potentially take us to the next level as a species. If that means the end of copyright as a concept, so be it. My big concern is that like most futures (to paraphrase Gibson), this one won't exactly be equally distributed.
> I think you're conflating bad management practices with the effects of generative AI.
I kindly disagree :-). Management is about increasing profit, not improving the product. And we see the results: arguably most products are worse today than they were 10 years ago. Software is a great example of that: the hardware made huge progress, but the resulting software is slightly worse than it was when hardware was orders of magnitude slower.
> As for copyright, that's a matter for the courts and legislature to decide.
Unfortunately, I think the US is showing us otherwise. The billionaires decide and, again, they optimise for their profit.
> What's happening in AI, on the other hand, can potentially take us to the next level as a species.
All the established science says that we are about to collapse (most species already are: we are the cause of the current mass extinction, which is the fastest in the history of the planet). We don't have a solution to the energy problem, and we're running out of time. We don't have a solution to the climate problem, and we're running out of time. And we don't have a solution to the biodiversity crisis, which is well into a mass extinction.
There is one thing we know we can do: start doing less with less, adapt our society to handle the coming changes as well as possible and accept that it will be worse than today anyway.
The direction we are taking, though, is what you suggest: hope for a miracle, hope for what we don't know. "Maybe technology will save us". Sure. Maybe god will, too. Feels unlikely to me.
Prompt engineering may be a highly skilled activity or even an art.
But prompt engineering isn't programming because it doesn't allow aggregation. It's not the activity of building a system from reliable components. Prompts don't allow reliable recursion, etc. At best, it is simply a different but valid activity. But AI companies no doubt aim to remove the mystery from prompts in the fashion they aim to remove the mystery from programming.
"Prompt engineering may be a highly skilled activity or even an art."
Yea... That Wall-E ship is our future isn't it?
Reliable. Yes, all pre-2023 code was reliable, but with ChatGPT 3.5, programmers for the first time encountered unreliable code.
No it isn’t. For so many reasons, it isn’t. What a wildly incorrect take.
Here’s the most obvious reason why that isn’t the case:
There is a direct cause-and-effect relationship between the code you write and the resulting program.
That is not so with prompting, obviously.
Prompting is, therefore, not the same as programming.
This very much sounds like you would consider asking a Subway employee for a sandwich to be 'cooking'...
Abstraction is the opposite of precision.
Abstraction is the opposite of specialization, not precision. Multiplication, for example, is an abstraction that can be defined in terms of repeated addition of the same term, which is less general, and so more special, than addition of two arbitrary terms, but it is not less precise.
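A minimal sketch of that point (non-negative integer case only; the function name is my own):

    def mul(a, n):
        # Multiplication by a non-negative integer n, defined as repeated addition.
        total = 0
        for _ in range(n):
            total += a
        return total

    assert mul(7, 5) == 7 * 5 == 35  # the abstraction is no less precise than the primitive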
The use of LLMs for constructive activities (writing, coding, etc.) rapidly produces a profound dependence.
Try turning it off for a day or two, you're hobbled, incapacitated. Anecdotally, ThePrimeagen describes forgetting how to write a for/range loop in Go after using CoPilot for a while (see https://www.youtube.com/watch?v=cQNyYx2fZXw). Funny stuff.
Competition in the workplace forces us down this road to being utterly dependent. Human intellect atrophies through disuse.
In the arts, we see something different. People don't seem to like AI-generated content. For example, I monitor a random sample of recent pornography (random 4-second slideshow from all over the Internet, some scary stuff, see https://rnsaffn.com/zg3/) and AI-generated porn is still very niche, comparable in frequency to small fetishes like elder or latex or rope porn. I expected it to be more popular, but generally humans prefer something else.
As Tim Cook recently said, (paraphrased) once you start using Apple Intelligence to think and work, it's hard to stop.
Those skills that you no longer use will be lost. This is the profitable new addiction, and programmers like us are the first group of people to become hooked.
The response among programmers honestly feels very bimodal. There seems to be the camp that just loves LLMs and have turned to Copilot for everything. Personally, I used it to mock unit tests and that's all I really trust it with. Once I get to a problem I can't easily solve it's something obscure enough that there isn't a lot of web traffic for it and Copilot falls over too, so I find myself rarely using it. Sometimes it's helpful as an intellisense, but a lot of the time it's too slow even for that.
Its value as a text generator or, god help us, a search engine, has also run thin for me. I don't ever use it. I still prefer my google searches and going directly to sites.
> There seems to be the camp that just loves LLMs and have turned to Copilot for everything.
I wonder if that's the same camp that wrote crappy half-ass code before LLMs? There are a lot of people like that.
I actually don't think it is. I know some high performers who are absolutely in love with Copilot, so it's not just the poor devs. I honestly don't know what the difference is. Maybe it's workflow?
> ThePrimeagen describes forgetting how to write a for/range loop in Go after using CoPilot for a while
For me it's normal to temporarily forget the exact syntax of things like loops in languages that I don't use for a while. For example, it's been six months since I wrote python code, I'm sure I'll get lots of basic syntax wrong for a day or two when I next start a python project.
I haven't written anything in C/C++ for several years. I'm sure I've forgotten nearly everything and would need to read a basic book before starting a new project. But, the key thing is that I'd just need to skim the book. All the knowledge is there, it's just latent and needs to be refreshed.
I don't think AI has changed anything here. I bet if this person had moved into a role as a project lead where they were spending all of their time reviewing code of people competent enough to get all the basic syntax right, then they might temporarily forget the exact syntax of a for loop as well. They'd not think much of it, just take the 5 seconds required to refresh their memory and move on, because that is how human memory works. But because AI is a new technology, and therefore scary, when AI is the reason we haven't thought about something for a while and then naturally can't immediately recall it, we get alarmed. But this is just the way human memory works, and has always worked.
Sure, if you're using a powerful tool that allows you to completely change your way of working, then abruptly stop using it, it'll take a while to get your old way of working back. That's normal. There are many potentially scary things about AI and the way it will affect society, but I don't think this is one of them.
If you keep a curated cheat sheet / reference sheet for any language you use, it's way faster to reference that than to construct a prompt every time and wait for the bot to answer.
> The use of LLMs for constructive activities (writing, coding, etc.) rapidly produces a profound dependence.
> Try turning it off for a day or two, you're hobbled, incapacitated.
Hobbled? Maybe. Incapacitated? No.
Anecdotally, I use Copilot auto-complete and I can still code without it, although some parts (boilerplate) may be a lot more tedious. But other parts (not boilerplate) I barely notice.
I know this because Copilot in IntelliJ has a bug where it signs out, and sometimes I write code for a while (~15 minutes?) without it before I notice.
I will confess that when I have Claude generate all the code, it leaves me incapacitated, because I don’t form a mental model. Also, I don’t think Claude uses enough abstraction, and I know it struggles with uncommon foot-guns, so having it generate a large codebase would probably create something inflexible and with particularly annoying bugs. It can definitely write tests and probably also small black-box modules, but until Claude can write a large project on its own, I don't think anyone can correctly trust it to write the core components of one.
Maybe by using Copilot, I’m being a bit too comfortable with boilerplate, so I’m not abstracting as much, and creating more annoying bugs, vs. without it. But if there’s a difference, I’m skeptical it’s a large one.
This reads like someone who doesn't have a solid understanding of the underpinnings of generative AI.
Every major technology progression has come with 'sky is falling' predictions.
Generative AI accelerates the productivity of productive people, and potentially replaces the efforts of button-pusher types adding minimal value or creative input to a process. Not entirely different than personal computers, robots, locomotives, printing presses, and so on.
The examples you cite — PCs, robots, locomotives, printing presses — each caused various clear harms. LLMs will do the same. Waving off these harms as lack of understanding on someone else’s part doesn’t make them go away.
Some of the things you haven’t listed — air conditioning, internal combustion engines, car-centric cities — are busy causing so much harm that our biosphere is showing the strain.
Other things you haven’t listed — government-subsidized sugar production, sedentary lifestyles due to constant PC use — are busy causing so much harm to our bodies that humans are dying of simultaneous over-indulgence and despair.
People warning about these harms were _right_, and in each case they were greeted by similar dismissiveness. No technology guarantees a rosy future. All of it has a cost, some of those costs are very heavy, and some are permanent.
I think you're mixing in too many technologies. I used the examples I did because they were all things that essentially directly offset or replaced human labor.
Air conditioners did not replace humans in jobs; they made indoor spaces more habitable. The internal combustion engine alone did not directly replace people either: we didn't really have humans turning cranks all day at that point, and the IC engine was a component of bigger machines.
Several of the other examples you mentioned definitely had negative impacts as well as positive ones, but they were never really seen as creating risk to people's ability to earn money. If anything, they created new industries and jobs.
AI will definitely offset some jobs that are menial or able to be done with low cognitive effort. But historically those kinds of things have always been overtaken by the progression of human invention. Elsewhere in this thread someone mentioned lamplighters as an example. "Jobs" are frequently replaced by progression. "Careers" less so (but not totally immune).
I understand your point about labor replacement. Consider your POV, though: one person’s “menial” is another person’s stretch goal; one person’s “low cognitive effort” is another person’s maximum capacity. What of these people?
As the efficiency of our means of production increases, so too does its ability to leave behind those who are not your so-called “productive people.” Now, we have more than enough wealth in this country to solve that problem to everyone’s benefit, and I’d like to think that we will. However, some of those most directly involved and invested in bringing on the AI-powered future are literally right now attempting to gut the social safety net for the same country.
So, truthfully, I don’t believe a word of it. I think that an AI-powered future is just another ladder that the super-privileged want to climb and then pull up after themselves.
I'm not sure we read the same piece. The title is hyperbolic, they all are these days, but the content is much more measured. And I suspect you might even agree with the second to last paragraph:
> In my own practice, I’m learning to use AI not as a tool of compression, but one of expansion. Prompts become not endpoints but starting points. They create new voids to explore, new territories for my mind to inhabit.
> This reads like someone who doesn't have a solid understanding of the underpinnings of generative AI.
They're talking about the future, though. You are correct that neither they, nor you, nor I have a solid understanding of the technology of the future.
Like leaded gasoline and subprime CDOs?
The dangerous interactions were already known, the risk-level was criticized, but there was money to be made...
Eventually we will pull the black ball out of the urn. It is the responsibility of those currently alive to assess the possible harm their creations might cause.
This cynical invincibility complex is becoming a tacit allowance of extreme danger. It's not some annoyance for people to act responsibly, and not all technological achievements are the same as ones previously produced.
> Eventually we will pull the black ball out of the urn
What is this reference?
In the 18th and 19th centuries, gentlemen's clubs would vote whether to accept a new member. Each man would secretly place a ball in an urn, either black or white.
If your urn contained even one black ball, your membership was vetoed and your chances - ruined.
I had forgotten the stem of this, thank you!
Seems like it might be this? https://nickbostrom.com/papers/vulnerable.pdf
I have a solid understanding of AI, and it will destroy the value of most human work, creative and non-creative. The power to do exactly what your job is with the push of a button, for free, in seconds. If it's not already here, it's coming.
Ubiquity kills the value of anything.
Anything short of an actual boycott on AI produced work/art will result in this.
> I have a solid understanding of AI
> and it will destroy the value of most human work
From my view, as someone studying and working with ML and AI since 2017, these two statements seem in conflict.
Either you’re overestimating what AI actually provides, your understanding of what "most human work" means is skewed by the field you work in, or you’re underestimating the value of human labor.
We’ll just have to wait and see, I suppose.
I guess I’ll just keep working in construction. The arrogance of people who think the value of most human work is done hunched over a computer is staggering.
I concur.
I work hunched over a computer, but I have realized over the years how hollow it all is internally. Most people not only have no conception of what I actually do; most will also never see it, use it, etc. My work brings no pleasure to people and is confined to live on a bunch of NAND gates and across various wires.
I think it hurt me most when I realized, on days when I have been really productive, if I just turn my computer off and look around me, then I have absolutely nothing to show for it.
I feel you, I had this problem when I worked making marketing websites for a really scummy company and realized I wasn’t making the world a better place. I got burnt out.
I got a new job, I do websites and CRUD apps still, but it’s in service of a large multinational construction company, and seeing engineers and the actual boots on the ground doing construction work really makes me feel like the stuff I’m making means something.
We get an opportunity to go out in the field sometimes and it was an eye opener: a "small" construction site may employ 800 people and have people working round the clock.
There are ways to make this work meaningful, you just have to seek it out. Wish you all the best.
A boycott is voluntary. Presumably not everyone will follow, and those who still use it will be more productive which you say is bad. Do you mean we should ban it? That itself seems impossible to do considering its a program running on a PC.
Seems silly that we would boycott a technology so we can make busy work for ourselves.
Exactly. Being a lamplighter at the time when cities were illuminated by gas street lamps that needed to be lit, fuelled, and extinguished manually was once a stable and respectable career. Wherever cities began transitioning to electric lighting, which was far more reliable and brighter, the role of the lamplighter became obsolete.
They organized protests, strikes, and sabotaged the electrical systems because their livelihood was threatened by a given tech.
What makes you correct, while the generations of your ancestors who said similar things weren't?
To support the grand-post but reword it: we're on the verge of AI becoming a "great creativity equalizer". It used to be that each individual had a "skill level" on a scale from 0-100 for each practice, where very few people were above level 60. But now everyone can be a level 60+ artist, if they want to be. This lower limit will keep rising and, at a certain point (and maybe we're talking 1000 years), it will be hard for anyone to tell whether any everyday media is human- or AI-generated.
Maybe equality is good? For utilities like locomotion, that sounds great. But for creativity?
I think what will happen is a later generation will grow up bombarded by level-100 art and music. I guess that sounds net positive...right?
As a musician, I can tell you one thing: part of what gives me drive to make music is knowing that what I'm creating is a scarce resource. (For example, if I realize one of my songs sounds like another existing song, it stops sounding good to me.) If it gets to the point that AI can generate new songs for me, all of that value I assign to creating music is lost.
This dilemma came up before, as photography started to become practical. Artists didn't see the point in painting what the camera could render perfectly. So instead, we got Impressionism, and everything that grew out of it.
There will always be room in art to do something different, something "out of distribution," so to speak. Great artists will still surprise us. Average ones can still have fun. Crappy ones will just have to find something else to do. Life will go on.
As for the notion of a "creativity equalizer", that's laughable. It would take a god to do that, not a computer. AI is a tool, and some people will always be better than others at using the available tools. Those are the ones we will continue to call "artists," "musicians," and "writers."
Our ancestors didn't have machines on pace to surpass human productive capabilities on every axis.
So if AI reaches that point in the next few years, machines will replace humans for all productive work.
Were they incorrect? Most preventative measures were, at some stage, speculation or moral panics.
For example, our consumer laws emerged out of anxiety about poisonous food. Our libel laws emerged out of moral panics about misuse of the press. Etc. But they were still good measures.
There's also the asymmetric risk element. The social risks of the technology are larger than the material risks of putting it off. Being wary of that isn't Luddism, it's just math.
What makes them wrong?
I don't think generations of my ancestors made any predictions about AI /s.
I mean, an argument that "technology" hasn't eliminated work previously so we can assume it won't do so now is ridiculously broad.
I don't even think current LLMs will eliminate all jobs - I think it's limited to a particular kind of mediocre output, an output the demand for which is very elastic. But arguments about what "AI" can do should be based on its potential, not on some "technology has always created jobs" incantation.
> Generative AI accelerates the productivity of productive people, and potentially replaces the efforts of button-pusher types adding minimal value or creative input to a process.
I've seen this rough formula applied to earlier tech. But it's hard to see how, if AI/LLM usage consists of mainly in simple commands ("prompts"), it won't be about making people into "button-pusher types" or just empowering exactly those types. Indeed, one of the draws of generative AI is that it allows managers/marketers to get default mediocre illustrations for a text without having to consult with an artist or even a stock-photo agency (which peddled things made by artists - up until now).
You are correct in those examples, but I think it also opens up new avenues for creation.
I can design reasonably decent PCBs (up to 4 layers), and I can hack together not-terrible code in a handful of languages, from embedded C (e.g. ESP32 stuff) up to Windows or Linux apps. But I absolutely suck at graphic design and basic layout. Many of my creative projects stall at the UI level, and rarely are they worth paying someone to help with. I've been using genAI tools to help with design, layout, and similar things. I can also see the inverse: someone who may be good at design and can craft something that just oozes "use me", but who perhaps struggles with coding. genAI tools might allow them to use robust components, like ESP32s with attached touchscreens, to build things previously out of reach to them. This seems to me like something that can propel things forward, but potentially at the cost of a person whose previous job was mostly editing PowerPoint slides on the corporate template.
Yeah but this also doesn’t read like something that pretends to get into the underpinnings of AI.
It’s about the friction that AI removes from making things and the discomfort with what that might mean. This point is fair and your take on productive people is reductive. People aren’t 1:1 factories.
I’ve been watching my spouse participate in a seriously intensive, prestigious typography course. It’s quite fascinating to watch how a part of the process involves slow-walking through all of the tedious hand-lettering stuff. Like Mr. Miyagi-level training. The depth and complexity of the things they’re discovering and learning at a deep level by hand is quite enlightening. It would not be possible to convey it other than by this process: to understand how to design typography you basically have to practically engage with how it was done in every earlier historical period.
I believe this is the essence of the point of the article: that working through your ideas by hand involved lots of friction, which was valuable in training you to slow down your thinking. And now that is being done away with, and everyone can fart out nuanced BS and disregard anything that is more than a paragraph long.
As technology evolves, humans lose their capacities as their tools get better.
What are the average human's mental calculation capacities compared to our ancestors' before the electronic calculator existed? Who can calculate integrals by hand nowadays?
AI is the ultimate tool. Who needs to get a PhD when your computer can do the work / solve the task better than you would, and in a fraction of the time you'd require?
Sure there's still gonna be a need for specialists but on average most humans are losing their capacities because their tools get better.
I can still do integrals by hand. AI isn't the ultimate tool: it still can't do basic proofs, and it can't do automated proofs using a proof assistant language either. Every time I see someone say something like this I assume they do pretty insignificant work.
> I can still do integrals by hand
Good for you if you can do integrals by hand; I would bet, though, that you're in the small minority of people who can do that.
> AI isn't the ultimate tool: it still can't do basic proofs, and it can't do automated proofs using a proof assistant language either
I meant AI as in human level AI and above. It will for sure be the ultimate (intellectual) tool that can solve any intellectual task.
> Everytime I see someone say something like this I assume they do pretty insignificant work.
I'm having a hard time following your line of thought, although you might be right and I might be doing insignificant work compared to you.
Sorry for being too aggressive; it's just that I see a lot of people (managers) talking like current ChatGPT can replace people in my line of work (R&D in long-term economic planning), and I can't see people even begin to formulate the right questions, never mind understand the answers.
> As technology evolves, humans lose their capacities as their tools get better.
I don't follow this argument. Did humans become LESS capable when they invented a hammer?
Although a hammer is a useful tool, it's obviously not that advanced of a tool.
As soon as you have an 'advanced' tool that can automate things or do things for you better/faster/with less effort, you don't need to learn that skill anymore, so why would you?
It doesn't mean the capacity is lost forever and the skill can't be learned anymore, but it's gonna be less and less frequent at the population level.
Nowadays, how many people can:
- make fire without a match or a lighter?
- add large numbers or do math operations without a calculator?
- reach a destination without a GPS?
- learn history dates, lists of cities / states / etc.?
In the end, it's a bit similar to how species evolve. When a function isn't needed anymore, the organism tends to evolve without it.
Given that a hammer is a primordial tool, a first tool, you'd think that that switch would be the most striking at that moment. We went from having no tool at all to a tool.
I think you're mixing definitions of things. We don't lose any capacity at all, we just lose the need for a skill in our day to day life. We haven't evolved away from the ability to create fire, we just don't need that knowledge at this moment. I don't know how to hunt a gazelle, but the hunter didn't know how to drive.
Both "evolution" and "capacity" seem to be incorrect words to describe this phenomena.
That's because you're beginning with the assertion that LLMs are tools. You cannot create things with an LLM. You can't be creative with them. They are not useful for building things. What you get out of an LLM is a regurgitation of existing information, peppered with assumptions based on your query.
They're not tools.
A hammer helps you build a house. An LLM helps you read StackOverflow in the most inefficient means possible.
This is just nonsense, and obviously untrue.
It is easy to show that LLMs can create new things (“write a poem about libc”) or answer questions that cannot be googled (“can a pair of scissors cut through a Ford F150?”)
The poem won't be unique. It will follow a basic formula and make sure to pick up the words you asked it to use. You might not get the exact same poem every single time, but there's a reason that you can tell when something has been generated.
A good recent example are the lyrics of the songs on an album by a band called Saor. The album is called Amidst the Ruins.
Aren't tools exactly the things that produce the same result time after time? Say, a nail gun: it fires a nail. A CNC machine is a tool that makes things, but it does not itself do anything original. Hell, a cookie cutter makes cookie cuts. And I consider it to be a tool. One could free-hand them too, but most don't...
Surely we became less capable at solving that problem without a hammer.
If we had the hammer taken away we'd figure it out again. We got better at things, but we didn't actually lose any capacity.
If we had all the computers taken away, would we figure them out again? Maybe, but how long would it take?
Take it this way: it took decades for China to catch up on technology. Now they have, and what the Western world realises is that it's pretty hard to catch up.
If your whole nation becomes mostly reliant on AI, and another nation actually knows WTF they are doing, then it may become a problem.
Our whole nation can't make computers, just specialized individuals. This is what happens to a society as it advances. We don't all need to do everything, we fit specific roles, and it's a real problem in modern society.
Suddenly people need specific degrees to do entry level positions, making it difficult to move from one vocation to another without returning to school and getting another degree or diploma. We have the potential to do those other roles, but the specialization demands focus and time. This problem has nothing to do with AI, and its use isn't going to change that dynamic.
To be fair, what has happened in modern history is that the West has lost its ability to build hardware in favour of doing software, because that was more profitable. China is slowly getting better at doing hardware in general, it's getting obvious with robotics.
We can say "this is what happens to a society as it advances", sure. But it also means that if China stops building our hardware tomorrow, we're screwed.
Enter AI: teach all your citizens to depend on AI, and they will depend on the few companies that build AI. Which are generally controlled by those billionaires that keep showing that we should fear them. Maybe that is what happens to a society as it advances, but is it desirable?
If we look at history, what happens to a society as it advances is collapsing. Usually it's painful for those who live during the transition. Turns out we are probably those who will live it.
Now if we wanted our society to be more resilient to the changes that are coming, we would have to reduce those dependencies and get back in control as much as possible. AI is doing the opposite of that.
I'm saying this as no fan of AI, but honestly, this again seems unrelated to AI. Building computers was never a skill the average populace had, and I don't see any signs that the people who did make those decisions are outsourcing their choices to AI.
I don't know what you mean when you say "what happens to a society as it advances is collapsing". Civilizations both advanced and not have collapsed. All that being advanced did was make it more obvious when they do. This is blaming technology when time is the actual culprit.
It feels like we're blaming AI when it's really profit chasing that's killing us.
> Civilizations both advanced and not have collapsed.
Which one(s)?
> Building computers was never a skill the average populace had
Which is not at all a reason for the average populace to give up on other skills, is it?
Advanced? Rome. The Easter Island people were not especially advanced, but they had a societal collapse from which they never recovered. Lots of civilizations have collapsed. Are we blaming the North American Indigenous populations or the Mayan civilization for collapsing because of their technology? It could be argued they collapsed because they weren't advanced enough.
Sorry, can you tell me again which one of those civilizations you mention hasn't collapsed yet?
I'm really confused. You say: "there are great civilizations who have not collapsed, let me give you examples!" and you proceed giving me examples of civilizations that... collapsed.
Humans become reliant on tools all the time. I bet the folks in an average Amazonian tribe know a lot more about lighting a fire, catching and processing wild game, and building an improvised shelter than my excuse for camping skills. They aren’t reliant on electricity or a Bic lighter, that’s for sure.
Would I trade places with them? No.
In what way is a hammer an intellectual tool?
That wasn't the claim. The claim was:
"As technology evolves, humans lose their capacities as their tools get better."
Might be true, but I have a form of dyscalculia that made almost any sort of math impossible for me, so for this it doesn't affect me as I am not capable either way.
But AI has taken away my ability to think to solve a problem via code. Now all I do is just ask it.
> Who can calculate integrals by hand nowadays ?
Anyone who did calculus at the uni level, at least. And to paraphrase my algorithms professor, "I know nothing, but I know where to look when I need it" which is another paraphrase of Plato. Higher education will teach you how to get yourself out of a moat if you were ever to face the unknown. AI will not.
Then we need new capacities, attentive to our emergent means, and institutional frameworks capable of germinating them.
All the AI researchers I know who also happen to be parents will not let their kids NEAR tools like ChatGPT and Photomath.
No, it's "traditional" things for their kids that are CHALLENGING and require practice, like woodworking, learning an instrument, sports, memorizing texts for fun.
The title is semantically questionable:
- us?
- AI has power to erase?
I did not create AI & believe even autonomous AI systems are for time under human moral responsibility.
The social effect of AI, obviously, is that there will be no public forums, so we get back to the Middle Ages, except the peasant is smarter & more obedient, which is a good mix for mental health issues.
Regardless, when one needs more data:
1) Pick a small nation with slow democracy & bureaucracy.
2) Do a massive AI horde campaign targeting politicians & keeping them busy.
3) Simultaneously ship SpaceX-connected drones in crates & humanoid robots.
4) Now the people cannot sleep in peace & you start gathering behavioral surplus very effectively.
5) Sell behavioral data & reinvest in the sociological ruin of the government before they close the OODA loop.
The interesting step is what happens when standard baseline human has eventually no more surplus to give. I would guess models develop far enough that you can nudge the non-wallnut humans who don't care about privacy or anything to vote for candidates that then pass laws to outroot more eccentric people by cutting the rights to any sort of privacy.
I personally believe in the instrumentalist view of the philosophy of science & endorse the "instrumentarian power" term Zuboff coined for this. There is a very significant difference (Zuboff probably said this too, but I cannot recall, as I read the work when I was young): would you rather have no free will with no one else's interest dominating completely over yours, or have no free will with some other human having chosen what matters?
As endgame, such state makes lots of AI alignment literature paradigmatically obsolete.
> the resistance of a surface against a pen or pencil creates an important friction that works its way through your body and into your mind. That resistance is valuable; it forces us to slow down.
Beautifully said and fully agree! In my personal work, I’m always trying to find ways to slow down my thinking when the problems get hard. Even using a pen vs a keyboard is a great way to do this.
AI is quickly blurring the line between makers and curators.
If the next wave of advancement in logo-generating AI will allow us to converse with it meaningfully and work with it as a creative director might work with her associate designer, is it really a fatal flaw that the AI is not able to conjure up an editable PSD file? Is it really necessary that the AI produce a pile of sketches that led to the final product instead of working backwards from the final design to fake a trail of inspirations?
As curators we still need to slow down and take our time to ensure that the design we select is a good fit for whatever requirements we're working towards. I don't think a blazing fast AI necessarily erases that or takes that option away from us.
It's a nice essay but the premise that human intelligence and creativity will be erased because we'll have AI to do it seems a bit silly. I know a lot of people who use LLMs and it seems mostly unchanged in them.
I wonder... LLMs currently work because they have a huge corpus of data to train against. Data generated by humans.
What happens when almost all available data is computer-generated? When no new training datasets are available? Will the models feed back on themselves? Will they stagnate and cease improving?
Perhaps we'll enter a world where everything new and unique made by human authors (text, art, source code) is jealously guarded in an attempt to keep the LLMs out - lest even a single model learn it and spread it to all the others almost instantly making the original author's work worthless.
Image-to-vector software will produce an editable file out of a logo image.
And there is AI for that! Within seconds of searching I found something called Vectorizer.AI, and others.
Not sure how well they work, but the basic premise is that you could use any text-to-image AI to come up with a logo and then an image-to-vector AI to create an SVG that is likely editable.
If that doesn't work well today, it likely will in two or three years.
Technology has diminishing returns. If technology does everything for us, then what's the point of existing?
That's why it's so important for R1 to show the reasoning. The creative process has to be transparent and easily understood.
What evolved intelligent life is a set of processes that intelligent life has the power to erase.
This, but all civilized society.
Vaccines are a great example.
I’m curious why the author chose to publish his message on the internet vs. walking door to door to spread it.
It’s almost as if his desire to use technology was to collapse the process between idea and transmission.
I appreciate a nostalgia for process to some extent. I enjoyed my 200km walk across the mountains of Corsica far more than the plane ride from New Mexico to France…
However it’s possible that you can actually become more aware of and participate in the larger process.
Am I here to spend a month hand-drawing my logo and refining it, or am I here to build an AI disaster response system to save a million lives?
While I can see the appeal of these sort of arguments, and agree with much of it, I’m not sold on the value of process in itself in the larger arc of the process of creating actual value for the world.
I guess let us know when you actually deliver that “AI disaster response system” because that’s the wild leap in your logic there.
The author did not choose to express that it should all be done by hand. His point is that there is a value in slowing down your thinking by doing stuff by hand. Logos and unformed ideas being a mere example to illustrate a point.
In your case it sounds like the equivalent would be to actually go out and participate in an actual disaster response effort before reaching for AI or using AI to tell actual responders how to do their job.
Why stop at hand drawn logos though?