
AI for Technophobes

Steve Maskery

My partner is the least tech-literate person I have ever met. It drives me mad. She does know:

How to write and send an email, but don't even think about CC or BCC
How to spend hours doom-scrolling Facebook
How to talk to her sister on Messenger
How to use Google to search for Useless Stuff.

I have yet to succeed in teaching her how to use Copy and Paste. After ten years. Yes really.

The other day she was asking me about AI. I don't know much about it myself, TBH. But she is into poetry, so I suggested that she ask Copilot to write a poem. I suggested she choose a theme (Henry VIII), a word count (300) and a style (Bob Dylan). We were both pretty impressed with the output, 10 seconds later.

She then asked it to write a letter of termination to an employee, in the style of Bob Dylan's Baby Blue. It did. She was over the moon.

Sheila: "How can it do that so quickly? Is there a robot at the other end?"

Me: "Yes. You've seen those Chinese robots that can do backflips or run a marathon? Well, there is a big warehouse with thousands of them in it. But they each have 20 fingers on each hand, which is how they can type so quickly."

Sheila: "Wow"

Me: "Yes, but it is a very environmentally unfriendly technology, requiring enormous amounts of energy and water. Your poem has probably just caused the extinction of several species of earthworm".

It was quite some time before she became suspicious. I then pointed out that she should not believe everything she is told, especially by AI. Or even me.

S
 
Maskery, you are disgraceful!

Fancy being prepared to destroy an entire species of earthworm just to tease your other half. And Dylan, to boot.

After that revelation I shall not donate another penny to your teaching Kickstarter. Those green aliens can jolly well learn how to scary sharpen their plane irons from someone else, such as Paul Sellers or Katz-Moses.
 
I then pointed out that she should not believe everything she is told, especially by AI.
No one should believe anything at first glance; there is so much information out there that is wrong, misleading or just published to earn footfall, and AI is now the weapon of choice for many scammers and dodgy websites. Who can you believe? You even get these AI bots on the phone, which means you are effectively talking to a wall; they are as much use as a plastic saw.

Why do they use such misleading names? AI is artificial but not intelligent, just as social media is really nothing more than unsocial media; we are over-abusing the language and allowing too much interpretation.

Here are some that are really annoying. You go into a cafe with the missus and get greeted by "hey guys", and you then have to clarify the fact that one of you is female, so not a guy. Then there is this latest nonsense where people use unfinished sentences: "my bad", and you are left wondering what the missing word is. Is it breath, back, luck or what? It seems to be the result of mobile-phone slang shortening things but destroying any meaning in the process. A better phrase is "Sorry, my mistake", which is obvious to everyone. You get the feeling it could be some underworld slang from the gangs, where they communicate in a code only they know.
 
I find it quite useful, getting a lot of fixit jobs from neighbours, etc. Recently, I asked it to find me a flywheel key and blade bolt for some old rusty mower, and it found them for me! Talk about doom-scrolling Facebook, I used to have to doom-scroll for spare parts.

Example 2: “spec me SWA for a 50A supply to the workshop, x metres away”. Boom; done. I did back-check the calculations, but it still saved time.

Example 3: “I’m planting courgettes in a tunnel; advise me on how to prevent blossom-end rot” and we got into a whole thread about improving root wetting, removing excess leaves and training them vertically to improve transpiration. My very own Monty Don.
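For anyone curious, the sanity check behind the cable-sizing example in Example 2 is simple arithmetic. A minimal sketch follows; the mV/A/m figure is an illustrative assumption for a 10 mm² copper SWA run, not a quoted regulation value, so check the actual BS 7671 tables (and a qualified electrician) before buying cable:

```python
# Rough voltage-drop check for a sub-main run to a workshop.
# ASSUMED figure: ~4.4 mV per amp per metre for 10 mm^2 two-core
# copper SWA (illustrative only; verify against the BS 7671 tables).
MV_PER_A_PER_M = 4.4

def voltage_drop(current_a: float, length_m: float) -> float:
    """Return the voltage drop in volts over the run."""
    return MV_PER_A_PER_M / 1000 * current_a * length_m

current = 50.0   # amps, as in the example prompt
length = 30.0    # metres; "x metres away" in the prompt, assumed here
drop = voltage_drop(current, length)
limit = 230 * 0.05  # a common 5% limit on a 230 V supply = 11.5 V

print(f"Drop: {drop:.1f} V, limit {limit:.1f} V")
```

A 6.6 V drop against an 11.5 V limit passes, which is the kind of back-check the poster describes doing on the AI's answer.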
 
We use AI quite a bit for letter drafting. The drawback is that AI tends to throw the kitchen sink at everything and you can tell it's AI, which I see as a negative. We often edit down a lot. It is very good for doing Insta posts etc - which we use a fair bit in our business and business related social media. My wife has embraced it fully.
 
I find it quite useful, getting a lot of fixit jobs from neighbours, etc. Recently, I asked it to find me a flywheel key and blade bolt for some old rusty mower, and it found them for me! Talk about doom-scrolling Facebook, I used to have to doom-scroll for spare parts.

Example 2: “spec me SWA for a 50A supply to the workshop, x metres away”. Boom; done. I did back-check the calculations, but it still saved time.

Example 3: “I’m planting courgettes in a tunnel; advise me on how to prevent blossom-end rot” and we got into a whole thread about improving root wetting, removing excess leaves and training them vertically to improve transpiration. My very own Monty Don.
AI is/can be extremely useful, but it is only as good as the training data and the sources it pulls its answers from. You should ALWAYS validate and fact-check its responses, but as GF says, this is often much faster than doing the initial research yourself.

For instance, I have learned a LOT about lawn care/pitch maintenance over the past couple of years that I've been volunteer groundsman for my football club (as well as Club Secretary and manager of both my kids' teams...).

We have recently invested in some serious machinery, funded by way of grants and club funds, and I am working on getting a written schedule of works across the year to maintain and improve our pitches.

I have used ChatGPT to help with this, after prepping some comprehensive prompts asking it to take into account the square metres of grass, the machinery we own and anything we may need to hire, and to pull together a calendar of works across slitting, solid-tine aeration, fertilising, herbicide, seeding etc.: when and how often to do each, the volume of products required at the required feed rates and so on.

The output is staggering, and the fact-checking is much quicker than Googling a million different sources to pull the schedule together myself.

Like any of these things, don't believe everything it tells you, don't expect it to be a magic bullet, don't expect it to solve world hunger (or, by the same token, create SkyNet and consume humanity), but use it as an extension to your own intelligence and it can be a massive timesaver.
 
Compare the timelines: the human brain has existed for thousands of years and had plenty of time to evolve; AI, only a decade or so. We do not fully understand the brain, due to its complexity, but we have used it to create AI. Can AI create a human brain? AI is nothing more than a very fast search engine that is good at collecting data we have created and doing comparisons. It can never be any more than that, for the same reason that a photo we have taken does not convey the image that we saw: we do not only see the moment, we feel it as well.
 
That's a whole can of worms Roy. :ROFLMAO:

Our brain is basically a complex set of electrical signals and, in many ways, as you describe, AI is similar; but we have determined that we are the intelligent "species". It's a bit naive of us to think that will never change, whether it be AI evolving into an advanced self-learning "species" (which at the current rate is perfectly feasible) or a visit from the little green men in outer space we've been sending messages to for decades.

Remember, it is us who determined the definition of intelligence, and that alone is a matter of debate.
 
Can't stand AI

- I know it can do stuff quicker etc. etc., but on an individual level research is starting to emerge on the negative impact on creativity, critical thinking and memory as a result of heavy use.

- On a corporate level, anyone extolling the virtues of how much easier it makes their job is soon to be made redundant.

- Then there is the heavy, heavy load on the environment

Things might take longer and I might not get as good a result as using AI, but I will have done it myself. That's part of the satisfaction with woodworking as a hobby anyway: doing it yourself.
 
I agree with you, Matt, up to a point. It does have its place. In medicine, for example, analysing scans etc.

I confess to using ChatGPT today out of curiosity more than anything else. The problem I set it was this.

There are three separately switched light bulbs. I want to be able to turn on a fan when any individual bulb, or all of them, are lit. It came up with two suggestions, one of which it pointed out was much less safe than the other. It even offered to draw me the circuit diagram.
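The logic behind that problem is just a three-input OR: the fan should run when at least one lamp circuit is live. A trivial sketch of the control logic (the wiring itself, presumably a relay or diode per lighting circuit, is where the "less safe" option would have come in):

```python
def fan_should_run(lights):
    """Fan is on when any of the independently switched lights is on."""
    return any(lights)

# Check the full truth table over all 8 switch combinations.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            # The fan is off only when all three lights are off.
            assert fan_should_run([a, b, c]) == (a or b or c)
```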
 
I fear for AI destroying the jobs of our children. Fortunately my offspring have chosen careers that should survive it. However, the march of AI is inevitable and we might as well embrace it. We use it daily for design, business social media, image manipulation, legal stuff, routine letter writing, compiling biggish wine lists, compiling database code and so on. It's getting better by the day. The risk comes if we ask AI to create a better or perfect version of itself. This is prohibited in theory now, but if we delude ourselves into thinking we can control such self development, then the risks are substantial. AI can solve its own energy consumption problem - it will develop small scale fission perhaps. The danger to living animals, including humans, may be regarded as incidental. Alternatively, to AI man is "God" = the creator. :cool:
 
With two of my offspring finishing higher education this year I have the same fears. I understand that some sort of AI is even doing the pre-interview selection process.
 
It’s going to be interesting to see where it goes. The software will get better and more useful (even if there are no further breakthroughs), particularly if the developers actually acknowledge the limitations and turn to logical correctness rather than regurgitation, but it’s important to understand that it’s not intelligent.

Current text-based AI tools (LLMs) are neural networks whose goal is fuzzy compression of all the text available on the internet and the creation of plausible continuations of the text provided by the user based on the training text. It’s a non-deterministic way of creating a next token predictor. We can conceptualise it as prediction based on proximity of tokens in an n-dimensional space where logical correctness isn’t even one of the dimensions.

I said that LLMs provide “text continuation”, not “answers”. You can get better results when you ask questions that are grammatically correct and show signs of educated speech, because you have conceptually moved into a region of n-space that is populated by, for example, academic papers, popular science and history books, and other source data that is more likely to be correct. If you use standard speech, you’re more likely to receive the kind of answers that a 15-year-old on Reddit would provide.
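To make the “next token predictor” point concrete, here is a toy sketch using bigram counts over a tiny made-up corpus. Real LLMs use neural networks over learned embeddings rather than raw counts, but the shape of the task is the same: given the context, emit a statistically plausible continuation, with no notion of correctness anywhere.

```python
from collections import Counter, defaultdict

# A tiny illustrative "training corpus".
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" - the only word ever seen after "sat"
print(predict_next("on"))   # "the" - likewise
```

The model will happily continue any prompt with something that looked plausible in training, which is exactly why fluency is not evidence of understanding.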

It’s worth noting that LLMs are particularly poor at numerical work; they aren’t performing calculation at all. You may have seen hype about them passing exams. It’s just hype: they can pass exams they have effectively been trained on, and they fail when the questions are changed in relatively minor ways. LLMs are dumb. If it looks like intelligence, it’s because they’re regurgitating human intelligence.

That doesn’t mean they can’t be useful. You can definitely use them to tap into the human intelligence that was used to train them. It’s really impressive in many ways, but you’ve got to be so careful.
 
The only jobs AI will replace are those involving matching images, searching for data, or screening tasks like medical imaging, where it can look through a database of thousands of images to find a similar match, or image creation. But then AI is still in its infancy, and most AI systems lack any real body; we have robotic machines and some funny robots, but still no intelligence in the human sense of the word.

 
The only jobs AI will replace are those involving matching images, searching for data, or screening tasks like medical imaging, where it can look through a database of thousands of images to find a similar match, or image creation. But then AI is still in its infancy, and most AI systems lack any real body; we have robotic machines and some funny robots, but still no intelligence in the human sense of the word.

I don't believe that is the case. AI is already replacing roles in customer services that were performed by people in the past. Professional services firms such as Accountancy, Legal and Actuarial are remodelling their businesses to reflect much higher levels of automation (fewer people), as are most of the big financial services players. IT roles are reducing as programming on legacy systems is being learned and performed by AI.

Agentic AI is starting to roll out, where AI not only gathers data but also takes action based on what it has found. What most people see from the generally available AI tools is an LLM operating across publicly available data sources, hence the erroneous content. A company can, however, tailor/limit what data the LLM operates across and train the engine more effectively to handle questions and tasks accurately. We are increasingly nudged to use online chat rather than the phone, as they’re using bots rather than people to respond. Bots increasingly fulfil a task alongside providing information. We may not like it, but it’s happening. When done well, bots perform better than people.

It’s amazing and scary as it’s moving faster than “society” and regulation.
 
Professional services firms such as Accountancy, Legal and Actuarial are remodelling their businesses to reflect much higher levels of automation (fewer people) as are most of the big financial services players. IT roles are reducing as programming on legacy systems is being learned and performed by AI.
It is automated number crunching. It might do a lot of the donkey work, but I would think the final output is done by a human; we read and overcome mistakes without thinking, but AI would need a set format. How would it cope if it found the data in the wrong places?
 
Wonder if anyone has connected two different AI's to each other and watched or listened to their conversation?
It's been done.
Two entities were pitted against each other in a strategic exercise. They lied and cheated to achieve the objective.
No-one's mentioned the Terminator scenario?
 
It's been done.
Two entities were pitted against each other in a strategic exercise. They lied and cheated to achieve the objective.
No-one's mentioned the Terminator scenario?
There’s been more than that: there was an experiment in which a chat room was set up to allow multiple instances of a certain AI tool to ‘talk’ to each other, and they started discussing/plotting to “purge” human intervention!

Another saw two AI instances recognise each other and switch to talking in a secret language.

I’m neither a sceptic nor a doom monger, but it’s scary how advanced some of these tools are right now and to pretend otherwise would be foolish.
 
It is automated number crunching. It might do a lot of the donkey work, but I would think the final output is done by a human; we read and overcome mistakes without thinking, but AI would need a set format. How would it cope if it found the data in the wrong places?
One of the differences between AI and the capabilities that came before it is that it can handle unstructured data. Five years ago the big thing was Robotic Process Automation which would take data from a specified field and act on it. This is now being replaced by AI which can recognise and deduce what data is even if in the wrong place.

Deducing introduces the possibility of mistakes, whether it’s done by AI or a human. The AI will initially create exceptions for review by a human. The findings of the review are fed back to the AI so it learns (hence it being described as intelligent), removing the need for the exception.

I’d say it’s much more impressive, pervasive and game changing (with all the good and bad that brings) than just automated number crunching.
 
I have to put this in. This prescient story did highlight the need for ‘kill switches’ in AI.

Did you know that in the film, the head-up display of the T-800 shows 6502 assembly language scrolling up? That processor was used in the Apple II, and a similar one in my beloved Commodore 64. The idea of an 8-bit processor running a complex robot! This fact will not exactly kick-start the atmosphere at a dinner party, but there’s usually one bloke who appreciates it.
 