Messages seem to be disappearing from my "sent messages" folder. Two today with no record of them; I think it has happened randomly before as well.
One was the first message in a chain I started myself; the other was a reply.
If the recipient deletes a message, does it disappear from my sent folder as well?
I think the short answer is that we're smarter than men.
The long answer involves the fact that men are so good at dehumanizing people, and the reverse also appears to be true: they're more willing to believe in the personification/consciousness of a computer.
Men are stupid and gullible, and willing to take anything on faith if they hear it from an "authority": Donald Trump, Elon Musk, religious leaders, AIs created and/or popularized by tech bros.
Women tend to be more critical and we don't, en masse, subscribe to a hierarchy in the way that men do, so we critically dissect information regardless of who it's coming from, unlike men.
I don’t get the hype. Every time I’ve used AI to summarize something or explain something it’s been awful. Massive gaps in knowledge. I’ve given it two different equations and it tells me they are the same. I have to specify what a variable is three times in one prompt for it not to forget. I’ve asked it very specific questions about a niche topic that is still easily searchable online and it gives me egregiously wrong answers. And that’s what I don’t like about it. It’s so confidently wrong. Reminds me of a man. I feel like my ChatGPT is a totally different model than what everyone else uses. So I have no idea how people are getting this dumb little robot to write entire papers for them, because I can’t trust this thing to do the simplest tasks.
Yeah you have to be really good at detecting its nonsense if you want to use it effectively.
That is the problem: as an entire generation is raised with it, people will lose the skill to detect nonsense. They will lose the real-world reference points that help ground the truth.
And there is no way to trust AI companies not to adjust information to suit an undisclosed purpose - just like they already do with their algorithms.
Right? I was toying around with it today, having it rewrite a Facebook post, and its ideas were hilariously artificial. It just does not talk like a person. It talks like advertising copy.
Edit: I didn't even post the AI content, I was just looking at it.
I always like to share this story when this topic comes up because it is fitting.
“The Machine Stops” by E.M. Forster
As I’ve aged, I’ve found myself less likely to use many new innovations (not all of them, but many!). I use a knife instead of a food processor; a teapot instead of the microwave or a coffeemaker; I have a kindle and I keep buying Kindle books but I never read them - I always go to my physical TBR pile; hand sewing over a sewing machine, even. I have taken up gardening to grow my own food and prefer to bake things from scratch.
And I really, really find joy in the hunt of looking up information and full on research, and then weaving it together into coherent writing. I don’t want to use AI because it takes that joy away.
The main thing I can’t seem to break is my addiction to my smartphone, but I’m working on it. Oh, and I still wash dishes in the dishwasher because washing them by hand ruined my hands and no one else in this family would be willing to join me in the handwashing. I did wash them by hand for a long time. I even washed my laundry by hand for a while, but I’m slowly switching to wool, which needs significantly less washing.
None of this is from any kind of holier than thou approach; I just find that the meaning actually is found in the work, not the finished product.
The meaning is found in the work - absolutely agree!!!
Reminds me of Scottish Wool Waulking. A group of women sitting together at a table singing as they pound and press the wool to clean it of impurities. Sure, the wool is the "goal," but that sense of community and shared purpose is probably the real point.
Love this comment, and agree. The whole My Dinner With Andre 'electric blanket' speech feels so painfully relevant, the way so much of our modern lifestyles geared towards comfort and convenience and entertainment at all times ends up cutting us off from all of those little tactile things that ground us in the world, in our surroundings. That make us spend time in the world, feeling it.
The whole My Dinner With Andre 'electric blanket' speech feels so painfully relevant
Yes, this is exactly it! I’ve never seen that movie but now I’ll look it up.
I think a large part of the rootlessness people are feeling, that’s pushing them to seek hedonism and desperate actions like transitioning themselves, is because they are so disconnected from the physicality of life.
Edit for typos
Absolutely. Trying to cocoon ourselves away from anything hard or uncomfortable ends up leading to dissociating.
The whole My Dinner With Andre 'electric blanket' speech
I haven't heard of it before, looked it up, and couldn't stop laughing.
"Well if you use an electric blanket you become insulated and selfish. When you're cold and put on a regular blanket, you know you're cold and you want to help other people who might also be cold. But when you're cold and reach for an electric blanket instead, it brainwashes you with its technological powers into forgetting that you're cold and therefore you don't realize others might be cold and you don't help them." (Also "it might electrocute you"??? From all the non-live contacts near you??? Or I guess it's the distant, protected live ones... that exist on every electrical product ever that needs to be plugged in???)
As a woman who now sleeps longer and better than I have my whole life because of my heated blanket, which was introduced to me by a friend who recognized I was typically cold like her and empathized and offered me one (and I miraculously somehow haven't been killed by it... yet!), all I can say is "lol"
Your points are almost word for word the same ones put forward in this scene by the other character, however aside from considering that the movie was made over 40 years ago and safety standards on products were different...actually no, electrocution, burns, and fires do still happen with modern electric blankets, with multiple deaths and many injuries resulting every year, today. Maybe your particular one doesn't carry that risk due to its design, but it's not actually a baseless concern and most come with a list of warnings of what not to do, often including 'don't sleep with them on.'
That said, this whole convo...isn't about electric blankets except in a sort of symbolic way/segue, and that little quip about fear of electrocution isn't intended to be taken as much more than that. For me personally, one similar such example is living in a house with an HVAC unit where I can set and forget the thermostat and maintain a steady temperature all year round with no effort or impact and be detached in that subtle way from really feeling the changing of the seasons in my home beyond the electric bill fluctuating. It's comfortable, it's convenient...and it's hard to explain how much I miss living in a home with no air conditioning, heated with a wood stove, while fully remembering how uncomfortable and inconvenient it was at times, that something important has been lost in my life by moving away from that. That might not be something you personally relate to, but guaranteed a number of other posters here get it.
Someone told me their apartment was cold, and I want to give her an electric blanket, but I'm still figuring out a way of doing it that won't embarrass her. I'd told her my apartment was cold, and asked if hers was.
I don't see why it would embarrass her! Especially since she said herself she was cold in her apartment. It's kind of you to want to help her!
If you're worried about buying her one coming off as patronizing or something like that, just recommending electric blankets and explaining how helpful they are might convince her to get one for herself!
I'm pretty sure she can't afford it. I could barely afford an electric "throw" myself, much less a whole electric blanket. I had an electric blanket, my cat pawed it to the point of no return, then I got the throw.
But thanks for your suggestions, they help the wheels turn as far as my "cover story" goes. Get it? Blanket, cover. :)
Oh I'm so sorry to hear that!! Mine was a gift so I never realized how expensive they were :( Maybe secondhand ones are a little cheaper?
Of course, I hope she gets a blanket one day! And I adore the pun, 10/10 :)
I don't think there actually IS such a thing as a "second hand" electric blanket, due to safety issues (liability for the thrift store). Most don't sell medical equipment of any kind (walkers, etc.) because of liability issues.
"I think that that kind of comfort separates you from reality in a very direct way."
Wow, that sums it up. Thanks for mentioning this interesting scene.
I wonder if they're including things like deep fakes in AI usage for males. Because, obviously, 99.99999%-100% of women don't use AI for that, if they even use AI at all.
I tried using AI a couple times ever just to see what the hype was about, and it just hasn't been helpful for anything. I can write things myself and read things myself. I don't get the hype and it's actually been really annoying to see ads for it constantly. Also, not to mention, I see TONS of AI-generated videos on YouTube and Facebook reels and other places like that. They are very obviously AI generated due to the voices and images they portray, but I recently also saw a video discussion about how people now even pay for specific programs to make higher quality AI-generated videos for very quick and easy money, and the voices they use for the narrations in the videos are actually starting to get more and more realistic. I couldn't even tell some of those ones were AI.
I don't like it. I don't like any of it. And I know it is only going to get worse from here. The #1 thing I hate is how it is used to harass and abuse women and children; there are young boys using AI to undress girls in their classes. Adult men are using AI to undress and make porn of random women they know or don't know. It fucking sickens me and actually makes me completely fucking irate. I don't understand the depravity of men. And #2, I just can't stand seeing all this AI bullshit in general; I can't click on anything without "AI summaries" and ads for "AI services" and whatever other bullshit.
The naive answer to the literal question is probably that women are too busy dealing with real-life nuisances and don't have the time to sit around playing with the shiniest new toy. The article sub-head states "AI’s early adopters are disproportionately men" -- is there any new technology where the early adopters are not disproportionately men? It's easy to be an early adopter of something when you can spend time playing with it, experimenting, kicking the tires, trying out different things to see what it can do, because you have a skivvy to do boring stuff like obtaining and preparing food, picking up and doing laundry, checking the mail, whatever.
Gen X men seem to fucking LOVE AI. I went on a date with a guy who admitted ChatGPT wrote his dating profile bio, and he was using ChatGPT to explain to him every concept I introduced as a topic of conversation that he hadn't heard of. He also used ChatGPT to "prove" how likely we were, with our attachment styles, to have a successful relationship. I was doing the slow fade on him already but he made me roll my eyes all the time until I finally ghosted.
It's so beyond weird seeing how many people treat ChatGPT like it's some kind of oracle or authority figure to appeal/defer to.
It's really frightening. As someone in STEM (in a couple of fields), there's basically no chance of us ever creating a human-like artificial intelligence, for at least a few hundred years. There's a huge risk of us creating something that a majority of people can be convinced is a human-like intelligence, and then treat like an oracle/god, which is in fact ultimately owned and controlled by a tiny minority or even just one person.
The "advances" in consumer-level AI aren't about making it better, they're about making it more convincing. We should be afraid of that.
Once you have something that people are convinced has human-like thought, is benign, and has access to almost all of humanity's history, art, philosophy, etc., why wouldn't you feed it a bunch of political questions and then just implement the answers verbatim? On what basis could you, a single human with blind spots and bias, argue against the condensed wisdom of every human, living and dead, filtered through a benevolent and neutral super intelligence?
For right now we can point to times when it's told people how many rocks they should eat, but once those obvious tells are manually erased, what will we point to to show people they shouldn't rely on it?
Absolutely to all of this.
Personally, I've been a fly on the wall for some companies developing self-driving vehicles, and heard them outright say that they were going with less reliable components from companies that had better legal protections in place over components that were less likely to cause accidents but were from companies that had less robust legal teams. These massive corporations aren't our friends, they care more about covering their asses and turning a large profit than if they get people killed.
But a big part of what I worry most about is that when people are led to believe they can outsource their thinking to this thing that's supposed to be more intelligent than us, they'll just stop thinking for themselves. That's a tremendous amount of power to hand over to (?)...whoever ends up taking it, the men behind the curtain. That, and the sheer VOLUME of unreliable, mediocre crap-presented-as-elevated-truth-from-a-higher-power that AI can produce; it's enough to bury actual good sources, good data, expertise, real nuanced understanding of issues under unnavigable expanses of misinformation. Even if you have someone in place who's supposed to be editing and fact checking it in good faith, it's often harder to effectively catch all the mistakes in someone else's work than to just write it correctly yourself. Things slip through the cracks.
Big fan of the Enlightenment (many aspects of it at least), and in so many ways this feels like the opposite of that.
It's almost like the tech sphere's embrace of "postmodernist thought."
The big aim they've championed for years now is 'disruption.' The goal of creating products that break the way things currently function and force adoption of that new product. Collectively, that contributes to entropy on a societal scale.
Men are naturally submissive and they need to accept this fact.
Agree. They want to be told what to do. My gen X marine coworker is a shining example of this, I have to hold his hand through anything that isn't just doing the basic part of his job.
They're much more obsessed with hierarchy, women tend to lean more towards community.
I actually genuinely agree with this. They want to know exactly where in the hierarchy they fall and men never seem to grow out of wanting a mommy to take care of them, save them, and tell them what to do. Some just grow to resent those things about themselves and respond to those insecurities in really toxic and dangerous ways.
Yes. They enjoy jockeying for position as well as having a guru or leader. I think women are more likely to fall for girlboss or bestie manipulations such as MLMs, but they're often doing so to try to do it all or become independent business owners. An attempt to better themselves, not typically a bid for absolute domination or whatever.
But men are more likely to find gurus or male leaders and will defend them to the death. Are manosphere things just the Lularoe of maleness? Is it pyramids all the way down?
He sounds disgusting. I think I would literally have walked out of that date. Not even kidding
This word is overused but I'm pretty sure he was a narcissist. He was pretty good at mirroring which is what got me on the date. I figured it out before long thankfully.
Jesus, he may as well use an origami fortune teller to run his life. Why would anyone put so much stock in ChatGPT?
Because he was a vapid idiot trying to grandstand as an intellectual
Same as the reason men are more likely to lose money in gambling and get scammed; they're just not as canny as we are. Whether women are more biologically risk-averse, or men are socially protected from the consequences of terrible decisions, the statistics are in: women just generally make better choices and sniff out bullshit better.
I still don’t see why I should use AI. The common uses I’ve seen, write a car commercial in the style of Emily Dickinson, hur hur, or composing emails, are dumb at best if not inferior. There may be ways in which I could use it productively, because I realize in theory AI could be useful, but I don’t know what they are and I have enough to be getting on with.
All the writing I've seen produced by AI has a hollow, bland quality to it. Like a high school essay that's going for word count and based on skim-reading the Wikipedia article. I always feel bored by it, even if I'm interested in the topic.
Yes--and no one is going to read it. Who wants a communication from someone who didn't bother to write it themselves? I'm picturing a future where people shit things out with ChatGPT and other people put it back into ChatGPT to get a TLDR and have their screen reader read it out loud to them.
Skimming a text, scraping the Internet, it's basically the same thing.
It's good for organizing mass amounts of information and outlines, I do like it for that. It's bad at creativity. Which is fine, I just think the place for it is more organizational anyway.
For AI developers:
Organizing mass amounts of information and outlines = intelligence
It's at least a time saver. I have a job in which I am often given a bunch of very poorly written text and need to make it make sense. You can't rely on AI in its entirety, it still hallucinates. But between that and Notebook LM, I can make a basic outline, and then go over the text manually and expand upon it and make sure that everything lines up.
Not sure that makes it intelligent, but it can at least be more helpful than a shitty intern.
So the machine learns, and not the shitty intern. So what's the end result in 20 years?
Ideally that I get to retire without continually babysitting nepo hires.
Realistically, I think AI will be throttled and chips will be Nerfed and people will have to purchase modules or 'software' for specific purposes, with vetted information banks.
Information is not intelligence. The input to these models is information, with the theory that optimization based on erroneous deductions will eventually serve to 'teach' (input testing is called training for a reason) an algorithm how to 'think'. This is a plug and play perspective on human intelligence that is just so male. It's along the same line of thinking as artificial wombs when we don't even understand what causes some women's reproductive issues. We can't prevent major issues in pregnancy and we're still losing mothers from childbirth complications.
To me it's just galling hubris. Garbage in, garbage out. While everyone loses their ability to differentiate between garbage and non-garbage.
I dunno, maybe they have enough natural intelligence they don't need the artificial kind.
I have real confabulating toddlers to have frustratingly circular conversations with; I don't need the aggravation of the fake ones, too.
Guess this is a bit like my post earlier today but:
Perhaps they don't use it because it doesn't work for them? It often outputs BS answers when it comes to homosexuality and women. The text it produces is a lot of rubbish.
That said, I still use it for guidance sometimes, in the knowledge that it is far from perfect and even gets things like basic sums or unit conversions wrong. You have to know your stuff anyway to use its output.
I just don't see what it offers for non-creative situations. When it started getting big a few years ago I went into it with an open mind, but it's just not helpful and not worth the costs or the risks. I'm forced to use it for work in certain situations, and it's not convincing me it's getting any more helpful.
And AI for creative uses is offensive to me on a level that I can't even put into words.
edit: I guess to sound less like a Luddite, I think there are definitely places where AI is going to be helpful. I'm aware of its use in things like land monitoring for agriculture (evaluating the health of plants from sat imagery) or environmental monitoring (again, analyzing sat images more quickly than a human ever could) or radiology (flagging imagery for cancer, etc, to help reduce the workload of radiologists), and for cases like that, where it's supplementing human intelligence, I think it's revolutionary, has enough benefits to outweigh costs, and is the future.
But I don't see the benefit of using it in place of regular searches, or drafting emails, or summarizing reviews when shopping, or low-level, daily usage, and I think in those cases, probably 90% of cases, AI is a fad that they're trying real hard to make happen but is actually a bubble that's going to burst.
to sound less like a Luddite
There are some great videos out there on the Luddites: https://youtu.be/wJzHmw3Ei-g?si=rOKjPbR8q4fm8Kew
Please learn the real history, because there are so many parallels. AI, like most automation under capitalism, exists to make the workforce more exploitable, not to lower workload or provide cheaper end products. The Luddites weren't scared of the big bad machines; they probably helped build them. They were pissed off about the exploitation that resulted from private ownership of the innovations that are actually, tangibly, created by the same workforce they disempower and exploit. Just like AI art cannot exist without the artists it will now deprive of both income and credit. We should be furious and vengeful.
And AI for creative uses is offensive to me on a level that I can't even put into words.
Same. Although I'm trying.
AI is a fad that they're trying real hard to make happen but is actually a bubble that's going to burst.
Reminds me of VR. It still hasn't taken off, has it?
I think it might make things worse for radiologists. To make it not miss things you would have to make it over-call everything and bring on either a lot more second opinions or a slew of unnecessary biopsies with their complications.
From what I understand, the way they work in radiology is that they're trained to err on the side of false positive rather than false negative, and they're just used to filter out obvious negatives, and the radiologist makes the final call.
Like, if the radiologist normally sees 1,000 images in a day, and out of those there are 30 cases of cancer, the AI would first be run over the 1,000 images. It would filter out 700 images that absolutely, unquestionably do not have cancer in them, and then the radiologist would only need to look at 300 images to find those 30 true positives. The AI wouldn't fully replace the radiologist, just reduce their workload, and since the radiologist is still making the final call, there isn't an increase in unnecessary biopsies.
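To make that concrete, here's a minimal sketch of that kind of high-sensitivity triage filter. The scores, threshold, and numbers are made up to mirror the toy example above, not any real radiology system; the point is just that the model only auto-clears obvious negatives and everything else still goes to the radiologist.

```python
# Minimal sketch of a high-sensitivity triage filter (hypothetical scores and threshold).
# The model only auto-clears images it is very confident are negative; everything else
# is routed to the radiologist, who makes the final call.

def triage(images, clear_threshold=0.02):
    """Split (image_id, cancer_probability) pairs into auto-cleared and needs-review."""
    cleared, needs_review = [], []
    for image_id, p_cancer in images:
        if p_cancer < clear_threshold:   # only unquestionable negatives get cleared
            cleared.append(image_id)
        else:
            needs_review.append(image_id)
    return cleared, needs_review

# Toy run loosely mirroring the numbers above: 1,000 images, a handful of true positives.
import random
random.seed(0)
images = [(i, random.random() * (0.9 if i % 33 == 0 else 0.05)) for i in range(1000)]
cleared, needs_review = triage(images)
print(f"auto-cleared: {len(cleared)}, sent to radiologist: {len(needs_review)}")
```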
I've seen models like that in other medical niches along those lines, not just radiology, and from what I've seen of the results presented, a well-trained model really can significantly reduce the workload of the technicians without putting patients' lives at risk, which, considering the shortage of qualified medical professionals we have, does seem like it's a case where AI does offer a legitimate benefit to society/isn't replacing human workers, just lightening their load.
considering the shortage of qualified medical professionals we have
But why is there a shortage? Doctors need to make money, and the payout often isn't there: med school, student loans, caseload - and part of the "payout not being there," at least in the US, is how much money goes to insurance companies - plus other factors, for instance female doctors not wanting to work the crazy hours male doctors have traditionally worked. So the AI IS, in a sense, "replacing human workers." It's just not obvious. If 95% AI and only 5% human radiologists works - meaning there aren't a bunch of malpractice lawsuits and you don't have patients dying right and left - then yeah, you get to a point where radiologists are losing jobs and not being hired.
Or, for people with crummier insurance, or no insurance, they can sign a release saying they are okay with accepting results that have NOT been reviewed by a human. It's better than nothing, and for millions of women in the US, if the choice is having a mammogram reviewed by so-called AI for $30 and getting SOME review, versus not having a mammogram at all because they can't afford it, they're going to go with 100% AI-provided testing/review.
That kind of thing I can get behind. AI is a tool like any other, we just need to be shrewd how we use it.
I'm fine with it simmering down. It's good at organizing info, and it can do some time suck activities that I don't want to do anyway. I want it to start doing more tedious tasks and pave the way for more creativity, I do not want it to attempt to machine learn its way to creativity.
I've played with some of the fiction AIs, and it's... bad. It will occasionally give me something interesting, but for the most part it's so cliche and unoriginal that it's just putting out slop. I do worry that the slop will become overwhelming, but I hope that in that outcome people will be more discerning and find new human creators to indulge in.
There's a look and feel to it that our human pattern recognition can typically suss out... for now.
There's a look and feel to it that our human pattern recognition can typically suss out... for now.
They're so very formulaic. If you spend any time with a conversation-style or writing-style bot, you can immediately spot the social media comments that are bots. It's actually what's helped me wean myself off Reddit, seeing that a solid 10%+ of comments are now low-quality bots ... that real, gullible humans then interact with. Really horrifying glimpse at the future of the internet =\
Maybe soon AI will be able to flag AI comments? That'd be nice. The plug-in I crave.
AI reminds me of transhumanism.
Archive link: https://archive.ph/ghBDr
There's so much hype around AI, and I think some of that will die down eventually. Some of it has been here for a while and isn't going away. We'd be smart to know how to use it effectively. The article makes good points about "you won't be replaced by AI, you'll be replaced by someone who knows how to use AI" and potential productivity gains.
Still, I'm having a hard time getting excited about it, especially when so many of the AI results at the top of my Google searches have produced useless bullshit or even outright wrong, ridiculous information.
It’s therefore necessary for us to entertain an inconvenient prospect: If women are more risk-averse and fearful of technology, at least on average, and if an appetite and willingness to adopt new technology is a precondition of being able to thrive in a brave new labor market, generative AI could feasibly exacerbate the gender pay gap.
I don't agree with this observation. It has been shown throughout history that women will adopt new technology when it serves a purpose. Industrial machines, typewriters, word processors, office computers, on and on, were wholesale adopted by women in the workplace.
I've not read it yet but I hope it says a) because women are too busy in a real world with blood, sweat, toil and tears b) we can already see how men are using it to disguise themselves and re-write history to their benefit
I don’t like ai. I don’t like the idea of a machine doing my thinking for me. My dad loves it, so, so much.
Like the article says, women are more conscious of the risks involved. I hadn't even thought of the cons that this article listed.
Personally, I started using ChatGPT when I heard everyone else was using it. Then some free in-person training was available, and here we are.
Ai? You mean mediocrity engine?
"ME" - yes, that about sums up the attitude of average AI developer. They say "you, you, you" how their product will help YOU - but really it's about them, "let's make a product that will help ME make a bunch of money, Jeff Bezos or Bill Gates level of money."
"Certainly my Mediocrity Engine will propel ME to new heights of wealth!"
Anyone who wants to make it so that one person can do the work of five is the enemy of the little guy.
Thanks Artificial but... I'll always be old-school ... and use my own intelligence.
I think of it like a calculator. Even people who are good at math still use them.
The other thing about calculators is that you still have to know what you're doing to get anything useful out of them; you can't just go pushing random buttons and expect to have coherent calculations performed. The calculator doesn't know which number is the hypotenuse of the triangle you're working with. The same is true for AI, but most people don't seem to think that. Like you can't just ask it anything and expect to get a 100% accurate answer, you still have to have some idea of what you're doing and what a right answer should look like in order to gauge if the statistical model has produced nonsense or not.
One poignant example of this was illustrated by a YouTuber I watch who bought a book on foraging that was "written" by AI. The information would look to a lay person as if it was helpful, but in reality the vague but technically grammatically correct descriptions of plants would get you killed because it wasn't specific or detailed enough to actually distinguish between edible plants and deadly poisonous ones.
I was at one time trying to figure out what blender I should buy for myself and someone suggested asking AI. I'm like... "I doubt very much that it was trained on the latest products and brands of blenders and their relative strengths and weaknesses." Like people have really kinda silly ideas of what AI can do, and I guess maybe it's just me with my computer science background that has a decent idea of the limitations of these giant statistical models in relation to their training data.
Yes! It's like my engineering mentor taught us - computers are only as good as the data you put in, and results are only useful if you can understand them and be able to critique their accuracy. That's why he made us do everything the models do by hand before ever using the computer.
Given your background, how far do you think this stuff will go?
I'm not sure what measure you'd use for "how far" it will go, but my sorta cynical answer is that people will push it until they start getting sued for the fallout from the wrong answers it produces (ex. poisoned by bad foraging books). It also depends a lot on the cleverness of the people designing the training algorithms and interpretation algorithms, but it can only go so far as what can be represented by 0's and 1's and the operations that can be performed by comparing them. Many, many things in this world are not represented in zeros and ones (yet?) and thus the AI cannot "know" about them to "consider" them in producing an "answer". Also, when producing an "answer", the AI doesn't really have any actual "understanding" of what it's doing, just that according to its calculations on the numbers sourced from crunching the training data and the "context" computed from the prompt, a given word is, say, 95% likely to be the best word to use next after the last one. Well, this is for the ones with language models, at least - which are the ones everyone seems hyped about. Granted, they do have some grammatical rules to abide by so that they don't just produce a string of buzzwords with no sentence structure, but the rest is just probabilities based on some programmers' ideas of how some information should be classified in reference to other information.
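If you want to see what "the best word to use next" means in miniature, here's a toy sketch. The words and probabilities are hand-written stand-ins, not a real trained model, but picking the next word by weighted chance is the basic move.

```python
# Toy sketch of next-word selection, assuming a hand-written probability table
# rather than a real trained model. The "model" can only ever pick words it has
# numbers for; it has no idea whether any of them are true or sensible.
import random

next_word_probs = {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "moon": 0.05}

def sample_next(probs):
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print("the cat sat on the", sample_next(next_word_probs))
```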
If you trained an AI on news articles entirely favourable of Elon Musk, for example, and then asked it about Musk, it'd probably crunch out some shit like "Musk is a brilliant entrepreneur who revolutionized the entire enterprise of going to space!" The AI can have no idea of the truthfulness of this, only compute a number based on whatever "truthfulness" function the programmers included.
I guess it will go as far as people are willing to say that something has been accurately represented by a collection of 0s and 1s. But given how much people disagree on what is an accurate representation of anything...
Thanks for the insight. I've heard that AI-generated info will become the info that feeds AI, and that could get wild. Have you heard of this?
I think this was mentioned in a thread hereabouts not that long ago, and it is a bit of a frightening prospect. It'd be up to the folks who design the training data collection algorithms to pick "reputable" sources, or design a metric to calculate the probability that some source has been AI generated and disregard it (ugh, train AI on distinguishing between AI generated content and that written by actual humans and you've just potentially trained it to be able to write more indistinguishably from an actual human), but that seems like an uphill battle with the rate at which information is added to the internet and databases in general. Plenty of non-AI sources are already super questionable, and if the AI trained on that and produced more words about the questionable shit that another AI eats up and decides has an even higher "truth" probability because there are more "sources" to cite for the "fact" of the questionable shit... yeah, that's a recipe for the proliferation of questionable shit.
You'd hope that the people responsible for training data selection would take this into account, but it's possible that AI writing could be undetectable by either man or machine, in which case the only sure way to know it wasn't made by AI would be to be physically present watching the person produce it...
But to me it's a very similar problem to what's already going down with just regular humans in certain fields that like to pretend at being scientific, but are constantly having terrible replication crises with their studies. People cite the bunk "science", and the more it's cited, the less it's questioned, and then whole shaky precipices of "knowledge" are built upon what turn out to be a certain personal quirk of data "analysis" of a dude who can't even figure out why his wife is unhappy that he never takes the garbage out. Some humans are arguably worse at understanding and detecting BS than AIs who have no concept of it in the first place :P
Also reminds me a lot of the whole circular-citations thing with the WPATH and stuff, just on steroids, because AIs can "type" and publish a metric shit-ton faster than any human.
It could also turn out to be the most giant game of "telephone" ever played, where something perfectly reasonable gets garbled by an AI, then garbled again, then garbled again, until one day a kid in 3rd grade asks an AI what percentage of the earth is covered in water and gets back what percentage of a map created with a Mercator projection is water, presented as if it weren't skewed by the projection. Or something worse.
There's a bunch of markov chain generator subreddits that I occasionally find hilarious, but male deity preserve the AI that might train blindly on them... or any of the rest of reddit, for that matter :/
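For anyone curious, those generators really are tiny and dumb. Here's a minimal sketch of one over a made-up toy corpus; each next word is drawn only from the words that happened to follow the current word in whatever text it was fed, which is why the output so often reads as hilarious nonsense.

```python
# Minimal sketch of a Markov chain text generator over a made-up toy corpus.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog slept on the sofa near the cat".split()

# Record, for every word, the words that followed it in the corpus.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def generate(start="the", length=10):
    word, out = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:          # dead end: this word never had a successor
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate())
```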
Yeah, you'd hope that the AI developers would take this into consideration, but it could turn out to just not be feasible to prevent AI inception... or they could work out a neat solution, I'm not sure.
but it's possible that AI writing could be undetectable by either man or machine
I'm human and female. I don't use "man" to refer to humans - I was just reading some poetry by one of my favorite authors (now deceased), and she refers to animals as "he" all the time - so with an ink pen, I cross out (scribble out) the "he" and write in "she." It DOES change the sense and feel of the poem. I do this with all kinds of books - fiction and nonfiction.
Because I do this, when I write something there's going to be a different flavor to it, or tone, and the scenarios and interactions I dream up for my characters DIFFER from those of someone who just uses "man" to refer to "humans."
AI can't capture this - and all the other - microscopic - ways that writers play with language.
It could be that women are able to spot AIs a mile away, just like we can spot a man pretending to be a woman, but I reckon men will be just as inclined to dismiss this in considering solutions as they are to dismiss the idea that we have good instincts for spotting sex mimics.
Also, if it's to be done at scale, the different flavour needs to be encoded statistically. And once that's done, theoretically you can mimic that statistical distribution so that what the AI has written appears to have the same nuanced and microscopic plays with language that a human writer could add. They've already done something along these lines with the different "temperatures" you can ask the AI to write with, such as formally, sarcastically, and that sort of thing. It's by no means perfect, but people wouldn't be so hyped over it if it weren't pulling off a decent impression of these things.
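The "temperatures" mentioned above are partly a style/prompting thing, but there is also a literal temperature parameter in sampling. Here's a minimal sketch with made-up word scores showing how rescaling them before converting to probabilities makes the output more predictable or more varied; it's just a statistical dial, not any particular vendor's API.

```python
# Minimal sketch of temperature scaling over made-up word scores.
# Low temperature sharpens the distribution (more formulaic output);
# high temperature flattens it (rarer words become more likely).
import math

def temperature_probs(scores, temperature=1.0):
    scaled = [s / temperature for s in scores.values()]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]   # softmax, shifted for numerical stability
    total = sum(exps)
    return {word: e / total for word, e in zip(scores, exps)}

scores = {"mat": 2.0, "sofa": 1.0, "roof": 0.5, "moon": -1.0}
print(temperature_probs(scores, temperature=0.5))  # sharper: "mat" dominates
print(temperature_probs(scores, temperature=2.0))  # flatter: more variety
```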
So it's possible that I never intended to include all of humankind when I said it could be undetectable by either man or machine, and maybe I was just thinking of that one quote something like, "the best laid plans of mice and men..."
AI is not like a calculator. AI can do the entirety of a student's homework for them in seconds while they're busy watching tiktok videos.
This. Also maybe worth pointing out that the ubiquitous use of calculators in school has contributed to a hell of a lot of kids who graduate without being able to do the most basic of math without using one.
I don't like the tool because it's already been proven time and time again that the AI gets things wrong. Lots of things wrong. What we put in is what we get out of it. It's programmed by men, so all the usual misogyny and worse is in there.
Hmm. Daughter knew a guy in her Masters program who was using ChatGPT to see if he could cheat. She didn't.
I use AI all the time. It’s replaced google for me. Let’s just say I don’t miss the SEO infested search results lol
Personally, because I hate it and see it as one large step towards the further enshittification of humanity, turning us into big lame useless stupid lazy dependent adult babies. I particularly hate seeing it used to replace people in creative applications; it's offensive to my views on what art is for (to communicate with each other, to see through each other's eyes from different perspectives, etc). I see it as facilitating a push towards a profoundly disturbing, stupid dystopia where we're discouraged from developing many of the qualities that make humanity worth a damn and have real effort and skill in those areas devalued, can't trust anything we see or hear to be real, inundated with convincing-sounding 'information' produced first and foremost to be appealing, further alienated from each other, reduced to mindless button-pushers and consumers, being pushed towards living our entire lives as a sort of extended childhood plopped in front of a screen being entertained. And because the enormous potential for misuse in myriad ways is staggering and goes beyond even the insidious, society-breaking things we can easily predict.
Not to put too fine a point on it, haha.
Editing to add: not to mention how, in a time when we need to be making some hard decisions to curtail our carbon emissions and resource consumption, AI/blockchain demands a truly shocking amount of energy+resource consumption to power it, to the point where it makes anyone claiming to care about climate change but championing widespread AI adoption a massive hypocrite (alongside crypto, relatedly).
Convenience has hidden costs, almost always. I believe AI has a higher cost than I'm willing to pay.
This is a great point, and I'm a little annoyed by the author's assumption that women are just scared of AI or irrationally distrust it. Just like the way we spend our money vs. the way men do, women reckon more realistically with the costs. Men shoot first and (maybe) think later. That's why the world is in such a mess.
🔥