- cross-posted to:
- hackernews@lemmy.bestiver.se
Let me ask chatgpt what I think about this
Quickly, ask AI how to improve or practice critical thinking skills!
ChatGPT et al.: “To improve your critical thinking skills, you should rely completely on AI.”
That sounds right. Lemme ask Gemini and DeepSink just in case.
“Deepsink” lmao sounds like some sink cleaner brand
Improving your critical thinking skills is a process that involves learning new techniques, practicing them regularly, and reflecting on your thought processes. Here’s a comprehensive approach:
1. Build a Foundation in Logic and Reasoning
• Study basic logic: Familiarize yourself with formal and informal logic (e.g., learning about common fallacies, syllogisms, and deductive vs. inductive reasoning). This forms the groundwork for assessing arguments objectively.
• Learn structured methods: Books and online courses on critical thinking (such as Lewis Vaughn’s texts) provide a systematic introduction to these concepts.
2. Practice Socratic Questioning
• Ask open-ended questions: Challenge assumptions by repeatedly asking “why” and “how” to uncover underlying beliefs and evidence.
• Reflect on responses: This method helps you clarify your own reasoning and discover alternative viewpoints.
3. Engage in Reflective Practice
• Keep a journal: Write about decisions, problems, or debates you’ve had. Reflect on what went well, where you might have been biased, and what could be improved.
• Use structured reflection models: Approaches like Gibbs’ reflective cycle guide you through describing an experience, analyzing it, and planning improvements.
4. Use Structured Frameworks
• Follow multi-step processes: For example, the Asana article “How to build your critical thinking skills in 7 steps” suggests: identify the problem, gather information, analyze data, consider alternatives, draw conclusions, communicate solutions, and then reflect on the process.
• Experiment with frameworks like Six Thinking Hats: This method helps you view issues from different angles (facts, emotions, positives, negatives, creativity, and process control) by “wearing” a different metaphorical hat for each perspective.
5. Read Widely and Critically
• Expose yourself to diverse perspectives: Reading quality journalism (e.g., The Economist, FT) or academic articles forces you to analyze arguments, recognize biases, and evaluate evidence.
• Practice lateral reading: Verify information by consulting multiple sources and questioning the credibility of each.
6. Participate in Discussions and Debates
• Engage with peers: Whether through formal debates, classroom discussions, or online forums, articulating your views and defending them against criticism deepens your reasoning.
• Embrace feedback: Learn to view criticism as an opportunity to refine your thought process rather than a personal attack.
7. Apply Critical Thinking to Real-World Problems
• Experiment in everyday scenarios: Use critical thinking when making decisions—such as planning your day, solving work problems, or evaluating news stories.
• Practice with “what-if” scenarios: This helps build your ability to foresee consequences and assess risks (as noted by Harvard Business’s discussion on avoiding the urgency trap).
8. Develop a Habit of Continuous Learning
• Set aside regular “mental workout†time: Like scheduled exercise, devote time to tackling complex questions without distractions.
• Reflect on your biases and update your beliefs: Over time, becoming aware of and adjusting for your cognitive biases will improve your judgment.
By integrating these strategies into your daily routine, you can gradually sharpen your critical thinking abilities. Remember, the key is consistency and the willingness to challenge your own assumptions continually.
Happy thinking!
I find this very offensive, wait until my chatgpt hears about this! It will have a witty comeback for you just you watch!
Also your ability to search for information on the web. Most people I’ve seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely.
Gen Zs are TERRIBLE at searching things online in my experience. I’m a sweet spot millennial, born close to the middle in 1987. Man oh man watching the 22 year olds who work for me try to google things hurts my brain.
To be fair, the web has become flooded with AI slop. Search engines have never been more useless. I’ve started using Kagi and I’m trying to be more intentional about it, but after a bit of searching it’s often easier to just ask Claude.
Sounds a bit bogus to call this causation. It’s much more likely that people who are more gullible in general also believe whatever AI says.
Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.
Corporations and politicians: “oh great news everyone… It worked. Time to kick off phase 2…”
- Replace all the water trump wasted in California with brawndo
- Sell mortgages for eggs, but call them patriot pods
- Welcome to Costco, I love you
- All medicine replaced with raw milk enemas
- Handjobs at Starbucks
- Ow my balls, Tuesdays this fall on CBS
- Chocolate rations have gone up from 10 to 6
- All government vehicles are cybertrucks
- trump nft cartoons on all USD, incest legal, Ivanka new first lady.
- Public executions on pay per view, lowered into deep fried turkey fryer on white house lawn, your meat is then mixed in with the other mechanically separated protein on the Tyson foods processing line (run exclusively by 3rd graders) and packaged without distinction on label.
- FDA doesn’t inspect food or drugs. Everything approved and officially change acronym to F(uck You) D(umb) A(ss)
I’ve only used it to write cover letters for me. I tried to also use it to write some code but it would just cycle through the same 5 wrong solutions it could think of, telling me “I’ve fixed the problem now”
The one thing I learned when talking to ChatGPT or any other AI on a technical subject is that you have to ask the AI to cite its sources, because AIs can absolutely bullshit without knowing it, and asking for the sources is critical for double-checking.
Microsoft’s LLM, whatever the name is, gives sources, or at least it did for me yesterday.
I’ve found questions about niche tools tend to get worse answers. I was asking it some stuff about jpackage and it couldn’t give me any working suggestions or correct information. Stuff I’ve asked about Docker was much better.
I consider myself very average, and all my average interactions with AI have been abysmal failures that are hilariously wrong. I invested time and money into trying various models to help me with data analysis work, and they can’t even do basic math or summaries of a PDF and the data contained within.
I was impressed with how good these things are at interpreting human fiction, jokes, writing, and feelings. Which is really weird in the context of our perceptions of what AI would be like; it’s the exact opposite. The first AIs aren’t emotionless robots; they’re whiny, inaccurate, delusional, and unpredictable bitches. That alone is worth the price of admission for the humor and silliness of it all, but certainly not worth upending society over. It’s still just a huge novelty.
It makes HAL 9000 from 2001: A Space Odyssey seem realistic. In the movie he is a highly technical AI, but he doesn’t understand the implications of what he wants to do. He sees Dave as a detriment to the mission, which can be better accomplished without him… never stopping to think about what he is actually doing.
I mean, leave it up to one of the greatest creative minds of all time to predict that our AI would be unpredictable and emotional. The man invented the communication satellite and wrote franchises that are still being lined up as major Hollywood releases half a century later.
It’s going to remove all individuality and turn us into a homogeneous jelly-like society. We all think exactly the same since AI “smoothes out” the edges of extreme thinking.
Copilot told me you’re wrong and that I can’t play with you anymore.
Vs. textbooks? What’s the difference?
The variety of available textbooks, reviewed for use by educators, vs. autocrat-loving tech bros pushing black-box solutions to the masses.
Just off the top of my head.
Tech Bros aren’t really reviewing it individually.
I felt it happen in real time every time. I still use it for questions, but ik I’m about to not be able to think critically for the rest of the day. It’s a last resort if I can’t find any info online or any response from Discords/forums.
It’s still useful for coding imo. I still have to think critically; it just fills some tedious stuff in.
It was hella useful for research in college, and it made me think more because it kept giving me useful sources and telling me the context and where to find them. I still did the work, and it actually took longer because I wouldn’t commit to topics and kept adding more information. Just don’t have it spit out your essay (it sucks at that); have it spit out topics and info on those topics with sources, then use that to build your work.
Google used to be good, but this is far superior. I used Bing’s ChatGPT when I was in school; idk what’s good now (it only gave a paragraph max and included sources for each sentence).
How did you manage to actually use Bing GPT? I’ve tried like 20 times and it’s wrong the majority of the time.
It worked well for school stuff. I always added “prioritize factual sources with .edu” or something like that. Specify that it’s for a research paper and tell it to look for stuff the way you would.
The only time I told it to be factual was when looking at 4K laptops: it gave me 5 laptops, 4 marked as 4K, and 0 of the 5 were actually 4K.
That was last year though so maybe it’s improved by now
I wouldn’t use it on current info like that, only scraped data. Using it for history classes, it’ll be useful; using it for sales right now, definitely not.
I’ve also tried using it for old games, but at the time it said Wailord was the heaviest Pokémon (the blimp whale in fact does not weigh more than the skyscraper).
Idk man. I just used it the other day for recalling some regex syntax and it was a bit helpful. If you use it to help you generate a regex, though, it won’t do that successfully. It can, however, break down a regex and explain it to you.
Ofc you all can say “just read the damn manual”, and sure, I could do that too, but asking a generative AI to explain a script can be just as effective.
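For what it’s worth, Python can do part of that breakdown itself without any AI. A small sketch (the date pattern here is just my own illustrative example, not anything from this thread):

```python
import re

# Compiling with re.DEBUG prints the parsed structure of the pattern
# to stdout: a built-in way to have the engine "explain" a regex.
re.compile(r"\d{4}-\d{2}", re.DEBUG)

# The same pattern can also be written self-documentingly with
# re.VERBOSE, where whitespace is ignored and # starts a comment:
dated = re.compile(r"""
    \d{4}   # four digits (year)
    -       # a literal hyphen
    \d{2}   # two digits (month)
""", re.VERBOSE)

assert dated.match("2024-05")
```

The verbose form is handy precisely because the explanation lives next to the pattern, so it can’t drift out of date the way a separate (human or AI) explanation can.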
Hey, just letting you know getting the answers you want after getting a whole lot of answers you dont want is pretty much how everyone learns.
People generally don’t learn from an unreliable teacher.
I’d rather learn from slightly unreliable teachers than teachers who belittle me for asking questions.
No, obviously not. You don’t actually learn if you get misinformation, it’s actually the opposite of learning.
But thankfully you don’t have to choose between those two options.
Literally everyone learns from unreliable teachers, the question is just how reliable.
You are being unnecessarily pedantic. “A person can be wrong, therefore I will get my information from a random-word generator” is exactly the attitude we need to avoid.
A teacher can be mistaken, yes. But when they start lying on purpose, they stop being a teacher. When they don’t know the difference between the truth and a lie, they never were.
yes, exactly. You lose your critical thinking skills
As I was learning regex, I wondered why * doesn’t act like a wildcard and why I had to use .* instead. That didn’t make me lose my critical thinking skills; I was just wondering what was wrong with the way I was using the character.
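For anyone else who hit the same wall, a quick Python illustration of that distinction (the example patterns are mine, not from the comment above):

```python
import re

# * is a quantifier, not a wildcard: it repeats the PRECEDING token
# zero or more times. So "ab*" means an "a" followed by any number of "b"s.
assert re.fullmatch(r"ab*", "abbb")
assert re.fullmatch(r"ab*", "a")    # zero "b"s also matches

# The "match anything" idiom is the pair .* :
# . matches any single character (except newline), and * repeats it.
assert re.fullmatch(r"a.*z", "a123z")

# So a shell glob like "log*" corresponds to the regex "log.*".
assert re.fullmatch(r"log.*", "log2024.txt")
```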
You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn’t do otherwise (I’m not a [good] coder), it does not make me worse at critical thinking.
I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.
Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.
It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.
I mostly use it for wordy things, like filling out the review forms HR makes us do and writing templates for messages to customers.
Exactly. It’s great for that, as long as you know what you want it to say and can verify it.
The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.
It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.