Is there an ethical way to use a dictionary? Yes to both. And both could be citing where their information comes from (words are discovered in the wild, taken and used without permission), but much easier for AI companies. So why don’t they?
To clarify, are you referring to the LLM training data or to sources cited by LLMs when giving an answer? Because I agree that the former is something AI companies could disclose, while the latter is subject to "hallucinations" and not always reliable.
This is a good start. AI ethics also goes beyond these use cases. You have to be careful about the use of deepfakes (audio and video), automating away your own thinking, using private data, and respecting cultural and ethical differences between societies.
For sure, I agree that the broader ethical landscape is much bigger. My post is an attempt to zero in on very specific do's and don'ts for the individual user. I do mention limiting the use of private data in the LLM section, too. Thanks for the additional aspects to consider!
If you voiced those principles over the top of The Sunscreen Song, I’d pay good money to hear it.
There's my retirement fund!
AI has immense potential but comes with ethical challenges. Following responsible practices ensures a positive impact. Reflect and adapt as the landscape evolves.
I'm definitely an AI optimist, a believer in AI as a copilot, augmenting our knowledge and skills. I like your outline of how to use it with a level of responsibility/ethics. My only additional thought would be that I don't like sharing any text created by AI if I have not been able to verify the accuracy of its content, either by knowing the subject well enough myself or taking the time to check citation links.
For sure, that makes two of us. I actually did mention the "verifying facts" and "not publishing without human oversight" as guidelines for that very reason.
I generally haven't ever published any raw content from LLMs, but that's mostly because I prefer to write things in my own voice.
This is very much the same challenge as with every technology.
Social media is great but
- people use it to bully
- people use it to scam
- people use it to exact revenge/manipulate/lie/cheat/steal
Craigslist is great but
- people use it for prostitution and drugs
- people use it to scam
Telephones are great but
- 90% of the calls I get are spam/scam
The printing press is great but
- people use it to spread disinformation
- people copied others' work and presented it as their own at scale
- it's easier to print something and forge a signature than to forge an entire handwritten letter
The XXX is great but
- people will use the technology to do unethical things.
- people will make tools that ride on the technology to do unethical things.
Frankly, I'm not worried about AI; I'm worried about people.
Agreed. People gonna people.
You can make the point that AI lowered the entry barrier for shitty people more than any other technology, but that'd be an argument about the extent of impact rather than a difference in the fundamental truths here.
What's interesting is that, when I wrote Paradox, my apocalyptic novel about AI, I found that the fastest way for AI to kill humans was... to just give humans the excuses to do it themselves.
https://amzn.to/3WV7f0q
I tend to draw a similar conclusion: we're in a prisoner's dilemma of sorts, and much more AI is inevitable. On that basis, it's really important to understand these tools now, because as big as they are, they're gonna be 100 times bigger in the next decade.
Navigating the ethics is something we have to constantly think about, and we need to evolve ourselves so that we can continue to understand discussions at the frontier. And hey, I could not have set up a better softball pitch for you, because that's exactly what you do here.
Yup, that's pretty much it.
Thanks for the softball, I'll take over from here:
"Navigating AI is hard. Why Try AI makes it...uh...less hard. Endorsed by top voices like Andrew Smith."
This one: “Don’t use LLMs to spin content wholesale without adding value.” IE fucking try, dudez. I’d argue that not adding something unique or of value IS A SCAM. Even if it’s just sharing pretty basic info, at least combine multiple variants or something. Don’t be boring.
Yup. While most of us think "Nice, I have a tool that helps me level up my game," some will go "Nice, I can just copy-paste stuff now!"