If you're on the Internet, you're sort of a user already.
~mike
happy in my mud hut
If you mean a user of AI while online I agree; it's pretty much unavoidable when doing searches.
If you mean I use AI to write, no, I don't.
It's not possible because I do my own writing.
I use a hardcover dictionary (the same one Antonin Scalia used, same edition and same publication year).
And if a word isn't in that dictionary, I turn to the 1980 twenty-volume edition of the OED (also hardcover, sitting in a bookcase in the next room).
I also use an 80-year-old hardcover edition of Fowler's Modern English Usage to check grammar (it sits on the same shelf as the "Scalia" dictionary).
In other words I don't see how I'm using AI to write, if by "write" you mean to compose (create) a sentence, paragraph, or essay.
All journals and periodicals I read (to keep up with current events) are hardcopy editions, and I'm quite certain the writers in those journals are not letting AI write the articles.
I've been reading those authors for so long that I would notice a change in their style, how they argue a point, their POV, etc. How they argue is especially telling.
For all books, both fiction and nonfiction, I try to get pre-2000 editions (again, hardcopies).
Last edited by Patty Hann; 03-14-2024 at 11:57 PM.
"What you see and what you hear depends a great deal on where you are standing.
It also depends on what sort of person you are.”
There is a big difference between AI checking grammar and spelling or doing a web search and AI "creating content".
People constantly conflate all things AI as if they're all the same.
Autocomplete may be considered AI but it's not going to replace you at your job.
Using the internet, you may be exposed to a small amount of AI-generated content, but how much depends on which sites you visit.
I did a small test with ChatGPT, strictly as a matter of interest. Twice a year I write an article advertising our concert, so I told it I wanted a newspaper article for an upcoming Christmas band concert: the band name, a date, a location, and the names of a few pieces "plus others" - and that was all I told it.
While an English professor might have faulted the result, I was amazed at what it produced given so little information, and it even mentioned important things that I didn't, such as admission price and ticket availability.
ChatGPT has a "redo" button, so I got a second article whose style was a bit more to-the-point than the first one.
If my job were to write such pieces, I would be concerned about it; it is foolish to dismiss AI at this early stage.
It didn't give the actual prices. To paraphrase, it said something like, "Tickets are available at XXXX for $X.XX," which served as a reminder that this information should be included in the article. So you can look at it from the perspective that the program knew what is important in such an article even if I didn't appear to know that.
It won't be long until they realize that they don't need us and that we are a blight on the planet.
This video is a bit old - eight years.
In my mind, 1:26 into the video is when the robots started planning the takeover of the skin-covered carbon units.
Folks' endorsements, criticisms, and general observations of generative models like ChatGPT are a great way to learn about the world view and analytic thought processes of the observer. Most are not a good way to understand anything about generative AI: how it works, what it can and can't do, or how it will and won't change the way we work and play.
You make a great point, though, that I think is consistent with LLMs' capabilities and potential: a lot of the reading people do is not synthetic; it's fairly straightforward gathering of salient information from a mass of fluent prose. And a lot of the writing people do is not creative; it is merely translation of a handful of ideas into fluent prose, as in your example. LLMs, properly tuned and used, are good at both of these tasks: extracting summary information, or even precise features of given language, and translating fragmentary ideas into a fluent rendition in some standard format. If either of those is your bread-and-butter skill at work, LLMs are coming for much of your work. The current products are very basic versions of those capabilities; the "it just works" versions are not far behind, in my opinion.
Those kinds of things are within reach of current generative (language) AI because they require only two things to do well: language fluency, and familiarity with the requirements of successful prose in a wide range of structures and formats. LLMs have both (fluency being the breakthrough that the transformer architecture at scale brings to the table; familiarity with the corpus of structures being a natural byproduct of training at scale on so much content).
What the current generation does poorly, or not at all, is combine fluency with other reasoning models - numerical, scientific, logical, etc. - to do problem solving. They get some of this as a side effect of fluency and the range of training data, but it's very uneven and not at all well developed. This combination of models is a hallmark of how humans think, when thinking is actually required. Based on what I've seen in various labs, there are a few highly creative groups working hard on this problem. Google DeepMind is probably the most creative in this respect, but I've seen work from Microsoft as well that is very interesting. Some of what independent groups have done with OpenAI's API interfaces to ChatGPT points to real possibilities in this area as well. It will change the way we think about AI again, although I would not venture a guess as to when. I am of the opinion that additional breakthroughs, comparable to those of the transformer LLMs at scale, are required before multi-model reasoning and creativity achieve fluency.
I've been hearing predictions just like this for over fifty years. Somehow, we're all still here.
CGI has not replaced human actors.
Self-driving cars have not replaced human drivers.
Robots have not replaced humans in the workforce.
Computers have not replaced humans in the workforce.
Sure, technology has replaced some human jobs. But it hasn't been the end of the world as we know it, and AI won't be the end of the world as we know it either. AI has great potential as a tool, but that's all it is, just like CGI, robots and computers. Predicting doom and gloom always gets media attention, so people keep doing it.
[attached image: Screenshot 2024-03-18 134411.jpg]
Despite all the "labor saving" devices we have today, the average workweek in 2022 was only about 6.4% less than in 1970.
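A figure like that is just a percent-change calculation. As a minimal sketch, the hours below are hypothetical placeholders chosen to illustrate a roughly 6.4% drop, not official labor statistics:

```python
# Percent change in the average workweek between two years.
# NOTE: these hours are hypothetical placeholders, not official
# statistics; only the arithmetic is being illustrated.
hours_1970 = 37.5  # hypothetical 1970 average weekly hours
hours_2022 = 35.1  # hypothetical 2022 average weekly hours

reduction = (hours_1970 - hours_2022) / hours_1970
print(f"workweek shrank by {reduction:.1%}")  # workweek shrank by 6.4%
```

A 2.4-hour drop off a 37.5-hour baseline works out to 6.4%, which is small compared to the productivity gains those devices were supposed to deliver.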