Is AI Actually Useful?


A new Harvard Business School study analyzed the impact of giving AI tools to white-collar workers at Boston Consulting Group.

In the study, management consultants who were told to use ChatGPT when carrying out a set of consulting tasks were far more productive than their colleagues who were not given access to the tool. Not only did AI-assisted consultants carry out tasks 25 per cent faster and complete 12 per cent more tasks overall, but their work was also assessed to be 40 per cent higher in quality than that of their unassisted peers.

In today's video we look at the pros and cons of using AI at work.

Comments

You can tell this channel is 100% powered by AI; the sim presenter never blinks.

antoinepageau

My experience matches the study. I am a senior software engineer, so I write a lot of software and a lot of documentation about that software. Usually the AI-generated code only covers the common use case, which is little better than what I could get out of the typical documentation for an open-source project, for example. It's a little easier to use the AI as a search engine, but if the problem steps outside the boundaries of the most common use case even slightly, I'm on my own. Basically, it quickly writes code that I would have copied and pasted from an example anyway (the sketch after this comment shows the kind of code I mean), but I still have to do the hard parts myself.

It's a little better for generating documentation, but I typically have to make so many edits to fix errors or get the focus and tone right that the time savings shrink dramatically.

I'm a little worried that AI is generally only going to give a best-case 50% productivity boost to most people, while the market seems to be pricing in a productivity revolution... I'm worried for my 401k, I mean.
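As an illustration of the comment above, here is a hypothetical example of the "copied from an example anyway" tier of code an assistant reproduces reliably: a paginated HTTP fetch of the kind that appears in countless tutorials. The endpoint and parameter names are invented for illustration.

```python
# A hypothetical example of common-use-case code an assistant handles
# well: fetch all pages from a paginated JSON API. The URL structure and
# the "page" parameter are invented for illustration.
import requests

def fetch_all_items(base_url: str) -> list[dict]:
    items, page = [], 1
    while True:
        resp = requests.get(base_url, params={"page": page}, timeout=10)
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # an empty page signals the end of the data
            break
        items.extend(batch)
        page += 1
    return items
```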

luckylanno

My favourite quote regarding Large Language Models: “The I in LLM stands for Intelligence.”

germansnowman

I play with AI in SEO. What's interesting is that, due to the error rate and generic content, it's actually quicker to write by hand than to have AI do it and then fix all the mistakes it makes.

jalliartturi

I recently wasted two days trying to diagnose and fix a bug in my script thanks to over-reliance on AI. In the end, I gave up and decided to consult the API documentation and scroll through user forums. I immediately got all the answers I needed. I will never think of these AI tools the same way again.

caty

I work in software marketing, and quite frankly, as excited as everyone was about the new developments in the beginning, we've essentially stopped using any "AI" or LLMs. For coding, all an LLM can give you is basically the most common code you'd find in any library anyway, and if I ask it to code something longer or more complex, LLMs tend to cause more problems than they solve: because they don't actually "understand", they have no concept of contingency or continuity, so, for example, they switch how they refer to a specific variable mid-code. Ultimately, for the 30 minutes it saves me over coding from scratch or just looking it up in our documentation, I spend 50 minutes bugfixing the LLM code.

Same with user documentation: the LLM texts have no concept of terminological consistency, so they keep adding synonyms for terms that have a fixed definition in our company terminology.

And for the marketing part, you'd think LLMs would be useful for generating the generic fluff text you need just to fill a website, but because the output of an LLM is, by definition, the most common sequence of sentences and paragraphs from the data it was trained on, you end up with marketing fluff so incredibly boring, bland, and lacking in uniqueness that it's not even useful as fluff. The only use we've found for it so far is automated e-mail replies, which we previously handled via a run-of-the-mill productivity tool.
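To make that failure mode concrete, here is a hypothetical Python illustration of the mid-code variable drift described above; the function and variable names are invented.

```python
# Hypothetical illustration of the variable-drift bug described above:
# the model introduces a variable under one name, then refers to it by
# a synonym later in the same function.
def total_order_value(order_items: list[dict]) -> float:
    order_total = 0.0
    for item in order_items:
        order_total += item["price"] * item["quantity"]
    # A generated continuation often drifts to a synonym here, e.g.:
    #     return total_price   # NameError: name was never defined
    return order_total  # the corrected line
```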

KathyClysm

TL;DR: AI helps with things it's good at and hurts with things it's bad at. The problem is that it isn't really clear what AI is good or bad at.

Zachary-Daiquiri

I'm not knocking AI; I use it quite a bit. But my general feeling (based on how I use it) is that all it's doing is quickly scraping the top 10 Google results and summarizing them for me. Like I said, in my field that is very, very handy and saves me time, because I don't need to skim pages and blogs to find answers. On the other hand, it's never offered an answer or solution that would make me think it's done anything remotely original.

devonglide

I asked ChatGPT some basic engineering questions, and I can safely say that ChatGPT is, at best, a very knowledgeable first-year engineering student.

The problem, I think, is that the bulk of engineering knowledge is still found in esoteric textbooks, in engineering standards behind paywalls, and in word of mouth between engineers. It also doesn't help that engineering documentation is often a trade secret, for obvious reasons.

robincray

I appreciate the boldness of consulting firms offering predictions on the future of a nascent technology, as though they have any more insight than we do.

lelik

The first serious video I've seen on the topic. So much better than all the sales bros going, "AI is going to change the world within the next two years. Hire me and I'll tell you how."

SH-lyuy

As a scientist, I find generative AI (at the moment) very limited in its usefulness. Because it doesn't really 'understand' novel situations, it isn't helpful for planning experiments or studies. The most useful area is in summarizing reports or helping with writing. But even there you have to be careful that the AI isn't missing the major thrust of papers or publications (it can often fixate on certain things, or misinterpret them).

Non-generative machine learning has been a tool used for years, though. We use it pretty routinely to help correct for errors in sequencing, for instance, and for assessing the accuracy of variant calls in genetics. I'm of the same belief that, while a useful tool, it is one of a dozen tools in a worker's toolbox -- it doesn't replace the worker.
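For readers unfamiliar with that kind of tooling, here is a minimal sketch of the idea, assuming a per-call feature table: a classifier scores variant calls as true or false positives. The features, labels, and data below are synthetic stand-ins, not a real genomics pipeline.

```python
# A minimal sketch of non-generative ML for assessing variant-call
# accuracy. Features, labels, and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-call features: read depth, mapping quality, strand bias.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in true/false labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```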

wbmc

I work in a manufacturing shop, and I've used AI to quickly create code for certain tasks. We obviously don't have any developers on site, and some of our coding needs are fairly simple. AI has allowed me to create simple programs that complete a repetitive task without needing a programmer.
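A hedged sketch of what such a "simple program for a repetitive task" might look like; the directory layout and filenames are hypothetical, and the CSV files are assumed to share a header row.

```python
# A hypothetical shop-floor chore: merge a folder of daily measurement
# CSVs into one summary file, tagging each row with its source file.
import csv
from pathlib import Path

def merge_measurements(src_dir: str, out_file: str) -> None:
    rows = []
    for path in sorted(Path(src_dir).glob("*.csv")):
        with path.open(newline="") as f:
            for row in csv.DictReader(f):
                row["source_file"] = path.name
                rows.append(row)
    if not rows:
        return  # nothing to merge
    with open(out_file, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    merge_measurements("daily_measurements", "summary.csv")
```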

kulls

A software engineer using LLMs to write code is like having a junior you constantly have to go back to and tell, "No, that's not the solution." It's good at l33t code, though, because there are soooo many finished solutions on GitHub that were used to train the model. Where it completely falls down is on specialised enterprise code. So my job feels safe for a good while. Until an LLM actually learns logic and reasoning, I'm not worried at all.
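The classic "two sum" exercise illustrates the point: countless public solutions exist on GitHub, so a model can reproduce it almost verbatim.

```python
# "Two sum": the archetypal l33t-code problem with countless public
# solutions, which is exactly why an LLM nails it.
def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
    seen: dict[int, int] = {}  # value -> index
    for i, n in enumerate(nums):
        if target - n in seen:
            return seen[target - n], i
        seen[n] = i
    return None

assert two_sum([2, 7, 11, 15], 9) == (0, 1)
```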

jasonosunkoya

Just a side note: commercial freelance art projects are starting to become harder to find.
Illustrators, concept artists, and background artists are losing a lot of paying work, in my experience.

ricks

Hard not to notice that the definition of a high-end consultancy job requiring top students from elite universities is "come up with an idea for a drink" or "come up with an idea for a shoe", yet the people with the actual technical knowledge to make the drink or build the shoe don't get a look-in. If we compared the respective salaries, I'm willing to bet that the Apprentice extras earn double or more what the people who actually do the work earn. So when we hear that these corporate experts might be put out of a job by AI... my sympathy is strangely absent.

LockFarm

The skill moat you mention at the end is my gravest concern. I've been at my job for 13 years, and it's the expertise I gained over those years that makes me valuable to my employer.

Now that tasks are outsourced, either overseas via remote work or to AI for the small and annoying things, you can't learn how the system works by pushing through those annoying things yourself. That is how humans learn efficiency, and perhaps discover new methods not thought of by the prior generation.

Over the past few years, I feel like my workplace has been falling backwards more than moving forwards. I can't fully work with the people I'm supposed to delegate to because of the time-zone difference, which means that if they don't get to an urgent task, I have to do it.

Lately I feel this "AI" thing is a salesperson selling a bag of beans, hoping for some deus ex machina to save us from our grudging tasks and to sell customers 'a solution'. But in the end, the "AI" is merely office workers analyzing data under grueling deadlines, not unlike the Wizard of Oz being just a man behind a curtain.

The humans will do the work, but the machine will get the credit.

WorldinRooView

As a software engineer, I use Copilot to assist in making software. I do find it helpful, but as it stands, I cannot trust it to write good software. I generally find its answers wrong about 35% of the time on more complex questions, which are the ones I ask it in the first place. A feature I really like about Copilot is that it sees the other code files I'm looking at and offers helpful suggestions for the next line I'm writing. Right now I don't feel like my job is threatened by AI, but who knows about the future...

guyswartwood

I'm a medical translator, and because I'm a fast typist, I prefer translating from scratch to post-editing machine translations.

Sometimes they are frighteningly smart, but it's a bit like the world's smartest two-year-old. You can't rely on it, especially for sensitive documents where you need humans in the loop.

Tudor_Rusan

How do I square the results of this study with the fact that consulting firms don't actually offer real value to firms contracting them regardless of AI usage?

Ringofire