Why should I learn AI? It seems like learning real computer science or programming would make more sense.
Posted by hireme-plz@reddit | learnprogramming | View on Reddit | 69 comments
I really don't like it when they say, "You need to learn AI or you'll fall behind." IMO, learning AI is nothing more than typing what you think, which is essentially the same as writing anything else.
Take MCP, for example. The AI influencers were acting like it's a gift from the gods that allows agents to talk to your computer. In reality? It's just JSON-RPC, a protocol from the early 2000s, wrapped in a trendy name. We've had plugin architectures and middleware for decades. Telling an AI what a tool does in natural language is just a fancy way of writing the documentation file we used to call a README.
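To make the point concrete, here's roughly what an MCP tool call looks like on the wire. This is a hedged sketch, not copied from any real traffic; the `tools/call` method and `name`/`arguments` params follow the MCP spec's shape, but the specific tool name and path are invented for illustration:

```python
import json

# A hypothetical MCP-style tool call, expressed as the plain
# JSON-RPC 2.0 message it really is. The tool name and its
# arguments here are made up for illustration.
request = {
    "jsonrpc": "2.0",          # same envelope JSON-RPC has used for ~20 years
    "id": 1,
    "method": "tools/call",    # MCP's method for invoking a tool
    "params": {
        "name": "read_file",
        "arguments": {"path": "README.md"},
    },
}

print(json.dumps(request, indent=2))
```

Strip away the branding and it's a request object with a method name and parameters, which is exactly what plugin protocols have looked like for decades.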
Some people might say I'm a Luddite. But this is what I think, and I want to hear what other people think.
Beregolas@reddit
You don't need to learn AI... it's really not hard to use, and to be blunt: I think most people who really highlight how great AI is and how much you need to learn it are not terribly smart.
Get good at programming and CS; those are still the real skills. If you know how to program without AI, you can pretty easily include AI in that workflow later. You know what to ask for, how to structure code, etc.
hireme-plz@reddit (OP)
Exactly! I know, right? I’ve been feeling like I’m falling behind lately because of AI hypers.
HexspaReloaded@reddit
I’m not an AI hyper or a programmer, but I encourage you to use whatever AI or machine learning tools you can. LLMs have helped me learn Linux/bash, customize Home Assistant with YAML, create a coherent business strategy, build a woodworking project, and diagnose an anaphylactic event. Ymmv
Immediate_Form7831@reddit
I love that I don't need to learn it. I don't need to be exact in my instructions. It does a lot of things much better than me: writing syntax, doing boring mechanical changes spread over a lot of files, finding information in noisy log files, digging through git history, writing good PR descriptions. It is even surprisingly good at cleaning up the git history of a messy feature branch.
I can focus on high-level decisions, architecture, and all those things that you need an actual brain to do.
zzrryll@reddit
It generally provides room-temperature-IQ responses. So you're correct: if your IQ is at that level or below, the responses seem amazing.
For the rest of us, it just seems like a half-assed, underinformed take that drops most of the nuance or subtlety.
Beregolas@reddit
I mean, it's obviously not just about IQ. AI is incredibly good at formulating responses so that they seem smart or correct. I catch myself falling into this trap sometimes when I ask it about something I don't know much about, when I check from time to time how AI is working right now.
It's ensnaring. If I didn't know how confidently half-correct or outright wrong it often is with coding, I would definitely fall for that too.
zzrryll@reddit
I don’t agree with that take.
nomoreplsthx@reddit
One of the problems with AI assisted coding is that the hype is so absurd, it's masking a genuinely revolutionary change.
AI-assisted development is faster, much faster. Anecdotally I would say somewhere between 50% and 2x faster. But it is also much, much slower than the hype train implies.
This is an infuriating place to be, because the hypers are dominating discourse and swallowing any ability to discuss what actually works.
aqua_regis@reddit
There are several scientific studies that contradict that observation. In fact, one of the most well-known ones, from last year, observed a near-20% decline in productivity while the programmers' perception was a 30% increase.
Yes, AI can ship faster, but the phase of debugging can be considerably longer. The technical debt because of sloppy AI code accumulates. Plus, nobody knows and understands the code base anymore and thus, debugging becomes a "hunt and peck" endeavor.
SpaceNacho@reddit
I'm not saying this is or isn't true either way, but a study on this from a year ago is frankly stale at this point. Entirely different from how this stuff works now. Could still be accurate, but with how much has changed since then, I would be pretty skeptical until I can see a more up-to-date study.
HippieInDisguise2_0@reddit
Claude around November of last year feels radically better at solving problems than the tools I was using prior.
I've been a holdout against it since ChatGPT made waves a couple of years ago. But where it's at now is definitely useful.
Honestly I've been grappling emotionally with how good it's gotten. I've been against it hoping it'd take longer for it to be a true game changer but I think we're either there now or going to be there imminently.
Biliunas@reddit
I read a comment like this and wonder: is this just some AI bro burner account posting shit like this? Nothing feels "radically better" for me with any model; it's the same janky output with sometimes brilliant small ideas that never translate without a human present. I can't be doing something so differently that the results are night and day, right?
For example, I'd say Claude was the best, or at least similar to what we have now, from the beginning; it's just that they tried bolting on more and more context, with varying results. Together with the ridiculous cost and still-tiny context, it's worse to use than it was before.
And a final rant: I think the golden age was when open code just launched and you could use any model pretty cheaply. It was easy to test models fast, not like now, where they intentionally gimp the competitor models.
HippieInDisguise2_0@reddit
You can check my post history man.
I've been on this reddit account for about 10 years and post a ton in this sub in that span.
I'm no AI bro.
SpaceNacho@reddit
I get where you're coming from and I don't distrust your own experience. But I do think the distinction is a bit different than where you've landed with it. It's less sycophancy or trend-worshipping or whatever else and more just... being where the money is.
A lot of your complaints are real for individuals and smaller orgs. But I work at an organization that pays for a decent-sized enterprise subscription, and as an engineer (as opposed to leadership), we have access to more flexibility and options when it comes to our workflows, and it's clear enterprise is where companies like Anthropic are putting the vast majority of their focus more and more.
Not saying it’s a good thing but just the reality of why there appears to be such dramatic differences in perspectives on where it’s all at.
Stellariser@reddit
Yes, a lot of people think they’re faster but that’s because they ignore all the time they spend futzing with the AI.
nomoreplsthx@reddit
Do you think we aren't measuring that? Did you think I seriously opined publicly on a topic by licking my finger and holding it up to the wind and going 'feels faster'? What sort of moron does that?
nomoreplsthx@reddit
One of the issues with studies here is that the technology is moving far faster than academia can, which is why I am basing my assessment on internal data currently (when I say anecdotally, I mean 'based on internal data with small sample sizes, not based on my gut'), even if that data is probably a bit less rigorous.
It's also worth noting that the scientific data is pretty nuanced. There's a wide range even in the studies that do exist. The decline study is an outlier on the low end.
The error bars are huge, and I would believe the real number is anywhere between 0 and 100% gain.
reeblebeeble@reddit
Short term gains which average out over the long term.
I think there's also a loss of friction effect, where people are more excited to do more stuff because the barriers are lower, but they end up not evaluating as much in the planning phase because the costs are lower, and they make a lot of junk that no one really needs. That may seem unimportant but I think the time wasting adds up
steerpike1971@reddit
That well known study was not actually published anywhere. It did receive a lot of hype though because so many people want to believe it is true.
Hairless_Gash@reddit
The study I read, which I find plausible, says that productivity is a function of your current skill:
the highly skilled use AI gainfully with increased productivity, while the inexperienced can more easily get pulled off onto tangents.
Not sure how I stumbled upon this sub, but as a SW eng of 27 years I find the tools very, very gainful.
A number of gurus I've followed for many years also work this way.
Once you're at the point where the tooling is utilized well, which is a skill in and of itself, I believe you will find that true as well.
That said, I've recently retired early and now program for fun. I often use AI much less than I might have before, since I understand it can really remove the enjoyment from coding.
But the productivity increase is already real in my experience, and will be more real for others as the tooling matures.
I'm already convinced it's true that anyone not using AI will be uncompetitive, but done well, using it will become more seamless and new programmers won't even feel it's a burden.
But I will say one thing: if you're struggling to pick it up now, it might indicate you don't have the capacity to be competitive going forward.
Biliunas@reddit
Yeah, and I mean it's crazy that we're just lumping it all in one big pot.
A portfolio website or a component? That's like 10x improvement.
A fully distributed modular complex system? You gonna have to design this one yourself dawg.
TinyLebowski@reddit
I agree but with a caveat. AI assistance really shines in smallish projects with clear boundaries. On large projects with legacy code, it can be much slower than doing it manually (assuming you know the code base). It's good at implementing new features, but not refactoring code. It usually has to grep the entire code base which fills up the context window very fast. I hope this will be mitigated by IDE MCPs with tools for finding usages.
gordonnowak@reddit
this is the first truth-speaking on the subject I've seen on this sub
TheArchist@reddit
people really suck off llms, don't they. i personally think they are useless for getting actual work done unless you hold them at gunpoint and force them to give you what you need and not bullshit. not using ai means you'll learn more effectively because you will be dealing with your mistakes directly. i see no point in relying on llms when learning; it's a terrible idea.
i highly recommend learning computer science but realize that you won't get the full benefit unless you learn the mathematics required. difficult, but it will boost your fundamentals like nothing else.
Shama_lala@reddit
sometimes it does feel like things are being hyped up like they're completely new when it's just older ideas repackaged a bit differently. but at the same time I feel like the "learn AI or fall behind" thing isn't really about the tech itself, more like getting used to how people are starting to use it
Immediate_Form7831@reddit
You could have made the same case for "learning compilers" in the 50s. AI is a tool at your disposal as a software engineer. You can make use of it or not. For my part it makes me more productive; I can write better code faster by using it.
MCPs are awesome. I can ask Claude "check the status of our production systems", and it can quickly look up realtime info from a number of log files (which I've taught it about using skills) and correlate any issues with data from our ticketing system, all using MCPs to fetch data from different systems. Using this information it can tell me actual information about the system in minutes that would take me an hour or two to compile. And if there is a problem I can simply ask it to "explain this crash" or "tell me why this API call failed".
I have more than once had Claude Code find long-standing bugs in our systems caused by weird edge-cases which I would have had to spend weeks of dedicated focused time on. Claude found several of them in 30 minutes.
It sounds to me like you have read about AI tools, MCPs, etc, but never really used them to do real work.
Gugalcrom123@reddit
Programming languages are just a convenient notation for machine code. LLMs are not, they are non-deterministic.
Immediate_Form7831@reddit
"Just a convenient notation"? I do not agree at all. "Convenient notation" is being able to use string literals instead of writing an array of integers. Compilers and LLMs provide much more value than that.
Also, LLMs being non-deterministic is not very relevant, though of course it depends on how you use them. I would never blindly trust the result of an LLM.
_N-iX_@reddit
Learning “AI” doesn’t have to mean replacing computer science fundamentals. A strong foundation in CS actually makes AI tools more useful, not less. Without that foundation, you can generate things - but it’s harder to evaluate, debug, or scale them.
vardonir@reddit
"You need to learn to use a camera, or you'll fall behind." I'm sure someone has said that to a portrait artist at some point.
Different tool for the same job.
Gugalcrom123@reddit
What do you need to learn specifically? To write what you want in English? I thought anyone can do that.
TheCableGui@reddit
AI can’t create anything that hasn’t been already conceptualized.
1up_muffin@reddit
Being "good at AI" is having common sense and enough tech skills to know when something is wrong; you'd be shocked at how many people in the workforce lack even these skills.
Fundamentals of programming are super important.
AlSweigart@reddit
thats_bait.gif
unohdin-nimeni@reddit
What does ”learning AI” even mean? Read PAIP, then AIMA. Don’t pay attention to the so called “AI” trends of today.
AlSweigart@reddit
A = Artificial
I = Intelligence
And that concludes our intensive three week course.
Vandrel@reddit
It's not one or the other; AI is a tool. The better you are at development, the better you'll be able to leverage AI. There is a bit of a learning curve in writing prompts well. It's not that hard to do, but someone who knows what they're doing with it will tend to get better results.
patternrelay@reddit
I get your point, a lot of it is repackaging old ideas. But the shift is in abstraction level and behavior, not just protocols. Understanding AI helps you reason about where those abstractions break, which is becoming part of real engineering work.
FlashyResist5@reddit
Don't listen to random AI influencers, listen to me instead. I will tell you anything you want to hear for only $9.99 a month.
dafugiswrongwithyou@reddit
Hi, hello,
Your instinct is right, learning the fundamentals is much, much more important, and you should focus on that. A lot of the "AI" push is from people who are either invested in people believing, or have listened to those invested people and so believe, that:
1) LLMs are good enough now and will remain good enough, and/or going to get much better soon, 2) The costs to use these services is going to remain competitive, and 3) They're very easy to use and get good results out of.
I'm not convinced any of these are true.
Stories abound of people building unmaintainable spaghetti codebases because their chatbot doesn't do a good job of integrating new code with the old or using consistent methods (Amazon recently had some high-profile issues with their site caused by slop code making it into the system), or trying to use it as an AGI and then finding, whoops, it deleted their whole system, including the backups. Meanwhile, every week there's a new study showing that, no, the current crop of LLMs cannot and will not creep their way up to AGI status. Not good enough, not going to get good enough.
And the price? Right now, these systems are propped up by venture capital, the certainty this tech is cool enough that, if they give them money now, enough of us will pay to make it back with interest later. (Pssst: these are those "invested people" I mentioned.) At some point they need to start paying their own bills, and that means charging users more than it costs to run these things. Hey, fun homework; go check out what the Nvidia CEO reckons their engineers should be spending on tokens for LLMs at the moment. Then realise that price is going to get much higher over the coming years.
But, hey, let's say I'm wrong about all this. Well... point 3. The entire point of this tech is that it's so, so easy, right? So... what is there to get "left behind" from? If it's 5 years later, and all the legal and ethical and functional and financial issues are resolved and it really is definitely here to stay, honest, then... you can just start using it. Because it's easy. Of course, that assumes that "prompt engineering" is a bit pointless, just people wasting their time. Because if it isn't, if you do actually need to carefully construct your instructions to get what you need out... well, that's starting to sound rather involved. What was the benefit again? Anyway, we already have a way to fully and unambiguously describe the way we want a bit of code to work; it's called "programming".
zzrryll@reddit
I agree with you 100%. Most of these products are bleeding money.
AWS_CloudSeal@reddit
You are not wrong — and you are not entirely right either.
Your technical point is valid. MCP is JSON-RPC with better marketing. RAG is just retrieval with embeddings. Most AI "innovations" are existing CS concepts repackaged.
But here is the honest reality:
The developers winning right now are not the ones who only know AI and not the ones who refuse to touch it. They are the ones who understand both.
Think of it like this — knowing how a car engine works does not mean you should build your own engine every time you need to drive somewhere.
AI tools are productivity multipliers for developers who already know real CS. Without the CS foundation you cannot evaluate whether the AI output is correct, secure or efficient.
So your instinct to learn real CS first is actually correct. But dismissing AI completely means ignoring a tool that your competitors are using to ship faster than you.
Learn CS deeply. Use AI as a tool. Never let it replace your understanding.
The people who fall behind are not those who skip AI — they are those who skip fundamentals and rely on AI to think for them.
hireme-plz@reddit (OP)
Sure, it can spit out boilerplate faster, but honestly I’d rather write my own and know exactly what’s happening under the hood. Using AI feels like letting someone else drive your car while you sit in the passenger seat.
But I guess I should try it, just to know what I'm rejecting.
aqua_regis@reddit
Even though I am very reserved about AI, I have to admit that boilerplate is where it really shines.
Let it take over the simple, menial tasks. I've done it for a Python tkinter GUI where I only told it to create the controls and empty functions for me. Later, I filled in the actual business logic.
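That division of labor can be sketched roughly like this. This is a hedged, minimal example of the pattern described (a tkinter scaffold with controls and empty handlers, business logic filled in by hand); the widget names and the conversion logic are invented for illustration:

```python
import tkinter as tk

def on_convert(entry, label):
    # The "actual business logic" filled in by hand after the
    # scaffold existed. Here: a trivial Celsius-to-Fahrenheit step.
    celsius = float(entry.get())
    label.config(text=f"{celsius * 9 / 5 + 32:.1f} °F")

def build_ui(root):
    # The kind of scaffold the AI generates: controls created and
    # wired to handlers, with the handlers initially left empty.
    entry = tk.Entry(root)
    label = tk.Label(root, text="--")
    button = tk.Button(root, text="Convert",
                       command=lambda: on_convert(entry, label))
    for widget in (entry, button, label):
        widget.pack()
    return entry, label

if __name__ == "__main__":
    root = tk.Tk()
    build_ui(root)
    root.mainloop()
```

The menial part (widget creation, packing, wiring) is exactly what the model churns out quickly; `on_convert` is the part you keep for yourself.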
Yes, you are somewhat in the passenger seat if you give it full control. If you only limit the scope it's more like driving on cruise control with lane assist. You are still in full control, but the AI helps you.
As I said, I am very reserved and generally prefer to write my own code, but I have come to terms with letting AI do the menial tasks that only take time I could better use to think about the actual business logic.
Some time ago, my stance (as an experienced programmer who learnt programming over 40 years ago and who has been working in the domain ever since) was pretty close to yours but with a few test runs, I have at least reduced my aversion as I could see the benefits in the long run.
I would never let it do something I couldn't program myself, nor do I let it handle business logic. That's my part. Yet, GUI creation, scaffolding, tasks that I'd outsource to a junior are what I give to AI.
DejvShorran@reddit
Yes, learn programming first. After that it's good to get experience coding with AI, because that is what the industry expects. And this gets hard when AI creates thousands of lines of code that you need to mostly understand.
From my experience, the hardest thing about coding with AI is understanding and controlling what it does. And for that you need good architecture and code comprehension. So learning AI, for me, is getting experience coding with it and getting better at architecture and reading code.
hireme-plz@reddit (OP)
Actually, the most exhausting and annoying thing is that the industry expects AI coders.
I wonder what your good experiences of coding with AI have been? Pure curiosity.
DejvShorran@reddit
I've had some mostly good experiences. I was teaching a TypeScript course and wanted to create some interactive exercises, so I created a website for that. It has a code editor in the browser with TypeScript support and it runs code and tests. I could never have done this in such a short time without AI.
Also, I've implemented some quick features on a large chess site that I've made before.
It works well when you give it specific tasks within a system you know well. It does worse when you are just prompting without knowing the code; eventually it will make a mess, just by creating too much code. I've made that mistake too.
A_GratefulDude@reddit
The best users of AI are going to be the people that understand problem solving and programming the best. Learn to get good at programming and CS, but also learn how to put your thought process into language. Before AI it was an important and overlooked skill to be able to communicate technical concepts in plain language, now it’ll be even more important.
WombatsInKombat@reddit
Why should I learn computer science? I can compute things just fine with my abacus.
spidermask@reddit
If you learn real computer science and programming you can easily make better use of LLMs both in writing better prompts and properly validating their results so it's automatically better than just "learning AI".
If you mean learning AI like genetic algorithms, neural networks, machine learning, feature engineering, etc., then that's just part of computer science and honestly quite interesting even if you don't want to work in that field.
aqua_regis@reddit
There are two things you are mixing: skills and tools
You absolutely need to build up your own skills in order to leverage AI.
AI is a tool and if you don't have the underlying skills, a solid programming foundation, you cannot properly use that tool and even less are able to judge whether the result, the product of AI is usable or not, whether it has side effects, whether it is outright garbage.
AI can be very convincing in its "argumentation", yet be completely wrong and off-rails.
Even in order to properly prompt the AI you need a decent understanding of programming.
Only with solid skills can AI be a help and speed up your work. Without them, it just produces garbage and can actually slow you down.
hireme-plz@reddit (OP)
The logic is circular and exhausting. If I need to be an expert in the fundamentals just to babysit an AI and make sure it isn’t lying to me, then why am I wasting my time with the AI in the first place?
It feels like I'm too much of an AI hater. I just want to code without a hallucinating machine.
By the way, good point that it is just a tool. The only thing is that I'm not a fan of it.
MSgtGunny@reddit
Babysitting AI is dumb. AI makes easy things easier and hard things harder. Use it for simple things (once you can actually do them yourself; as a beginner, don't use it for this) or as a potentially more robust "documentation search engine" and call it a day (this use case is fine for a beginner, but is horribly energy-inefficient for what you get out of it vs. just Google).
Chlorek@reddit
By line count, code usually consists mostly of so-called boilerplate: code which sets things up, ensures proper future-proofing for changes, and follows a multitude of clean-code patterns. Even a single data object usually has a few different versions (usually at least a REST DTO, a domain object, and a DB representation). It's very important that these rules are kept, but they are also very time-consuming due to the amount of code. Being able to generate most of it and verify it saves a lot of work.
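The multiplication described above looks something like this in practice. A minimal sketch with invented names, assuming Python dataclasses; real codebases add validation, ORM mappings, and serialization on top:

```python
from dataclasses import dataclass

# Three parallel representations of the same "user" concept,
# as is typical in layered codebases. All names are illustrative.

@dataclass
class UserDto:
    """Shape exposed over the REST API."""
    id: int
    display_name: str

@dataclass
class User:
    """Domain object carrying behavior and invariants."""
    id: int
    first: str
    last: str

    @property
    def display_name(self) -> str:
        return f"{self.first} {self.last}"

@dataclass
class UserRow:
    """Flat row as stored in the database."""
    id: int
    first: str
    last: str

# The mapping functions are pure boilerplate: mechanical, important
# to keep consistent, and tedious to write by hand at scale.
def to_dto(user: User) -> UserDto:
    return UserDto(id=user.id, display_name=user.display_name)

def from_row(row: UserRow) -> User:
    return User(id=row.id, first=row.first, last=row.last)
```

None of this is hard; it's just volume, which is why generating it and then reviewing it is where the time savings come from.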
aqua_regis@reddit
I am also not a fan of it, not a complete hater, though, but still see its use. I've used it for a couple tasks so far, on two it was a great success, on a third one a complete failure - and I'm not talking about anything even remotely complex.
I use it mainly to do "menial" tasks for my hobby projects that I don't want to do - creating the scaffolding for programs, creating GUI scaffolding, etc. Things I could easily do myself without it. I usually don't let it anywhere near actual business logic.
In my line of work, AI is basically non-existent, even though I am a programmer. My systems are generally "dark sites", i.e. sites without any connection to the outside world, and highly sensitive areas. AI, not even local, has no place here. This is good old style manual programming. In my job, I can at utmost use it for presentations, emails, letters, some documentation "pretty up". That's it.
What aggravates me much more is that AI is currently being pushed everywhere, even in places where it has not even the faintest justification of existence.
typhon88@reddit
I'm sorry, but things making sense no longer makes sense.
AncientHominidNerd@reddit
AI shouldn’t be used as a replacement for skill. It should just be used as a tool. If you’re against using it then there is no reason to convince yourself to use it.
IAmFinah@reddit
Absolutely right. The grifters make out "learning to use AI" is some kind of skill that everyone should be focusing on. But no, it really is just a very basic skill and its learning curve is non-existent compared to learning a real skill, like programming.
kennlemy@reddit
Meanwhile me, with no foundational learning of concepts. My Claude Code agent's output: "In the next 1-2 weeks of these tickets landing, you go from 'above average' to 'genuinely best-in-class for a non-enterprise individual setup'."
I am not sure if I am a genius or dumb. But hey, I know this will be controversial on this sub...
AssignmentDull5197@reddit
I get the skepticism; a lot of it is rebranding. The real shift is tighter feedback loops: tools + evals + autonomy, not "magic JSON". If you're curious, there are some practical breakdowns of agent patterns here: https://medium.com/conversational-ai-weekly
disastorm@reddit
I don't think they are comparable. "Learning AI" is more like learning AWS or GCP or something; it's just learning the different toolsets and whatnot that are available, like the MCP servers you mentioned. It's not that these things are new; it's just that they are tooling created for a "new" technology (AI models which are currently more powerful than they have been historically).
If you learn toolsets, you still need to learn the programming.
Fast-Adeptness9669@reddit
I agree, there's no point. If you understand the subject, there's no need to conjure up prompts.
OnionsOnFoodAreGross@reddit
Why not just learn to ride a horse and buggy? Seems like learning a real mode of transportation would make more sense than driving a car. That's kind of what I think about these types of comments. AI is just gonna be so good you won't need to know the fundamentals. It's like long division: you don't use it.
Whatever801@reddit
Learning computer science and programming is a hard prerequisite for using AI effectively, and "influencers" are clueless and annoying. But it definitely increases productivity for experienced devs by an order of magnitude. I was a hater until January of this year but something changed with the models and I can't deny it anymore. There's nothing fancy about an MCP, as you said, but it allows you to integrate with 0 overhead. Example: today I got a support ticket about an object getting inexplicably duplicated. I gave the ticket number to Claude code and asked it to debug. It used the Atlassian MCP to fetch the details of the ticket. It then looked up the object ids from mysql and found the endpoints in my codebase used to create and edit this object type. Then it used the Observe MCP (my log provider) to find traces for those endpoints which correlate to the updated timestamp of those objects. It figured out that there was a PUT and POST request within 3 ms of each other. Then it went off and tried to convince me my user pressed save on 2 separate tabs within 3ms. I told it no that doesn't make sense so it looked at the DB and service level validations, found that to be fine, so it started looking at the frontend code and found a react race condition. It explained this to me and proposed a solution. I course corrected the solution a bit and it made the PR, deployed the fix to dev, and setup a playwright test to validate the changes. Fix is now in production. CI/CD build times excluded, this all took me about 15 minutes. I could have obviously done all of this myself but it would have taken me several hours.
Good-Discussion-9238@reddit
there is nothing to learn when it comes to ai; the machine does all the work. prompting takes zero skill: someone that's been using ai tools for 2000 hours and someone that just discovered ai tools exist have the exact same "skill" level, because automation does not supplement work. the goal of ai is to replace and displace work and workers, including programmers
thankfully ai still kind of sucks for a lot of things, but even if it becomes perfect (as it likely soon will), i don't want to live in a world where people no longer use their brains. that sounds miserable
if it isn't clear enough, i hate all of the ai hype and its existence, and pray every day that the bubble pops soon
trevorthewebdev@reddit
The thing is, and it kinda really sucks, you have to do it all now... like, you need to learn AI (how to use AI) and you need to know the fundamentals. Maybe you don't need to rely on syntax as much as before, but also you kinda do. And you need to learn, and continue to learn, how AI is changing things. You might know how ChatGPT works or the type of things it can do or not do, but do you know how Codex works with a small or big project? Do you know a bit about the agentic harness and how the latest version of Codex compares to Claude Code using Opus 4.6... actually that was two weeks ago, now it's Opus 4.7, but you need to know how to hand off to Sonnet or even Haiku. But also you need to really get coding fundamentals, or design principles, or at least how to read your code and know the syntax. Also, do you know the top libraries and frameworks for all those languages?
minneyar@reddit
If the AI boosters are right, then AI is advancing so fast that six months from now it'll be completely unrecognizable from what it's like right now. You might as well just wait and mess around with it then, since any time you spend on it right now will be wasted.
On the other hand, the fundamentals of computer science aren't going to change. If the bubble pops and they have to start charging 20x as much as they do right now for access to Claude (which is what they'd need to break even, let alone make a profit), the people who only know how to type prompts are going to be in trouble.
rafuru@reddit
You need to know the basics; otherwise you won't be able to validate whatever the AI regurgitates.
Now, do you need to learn AI?
Absolutely, at least prompt engineering. Take it like you're learning any other programming language and you're adding a tool to your skill set.
Own_Attention_3392@reddit
...except you write what you think and you get results out the other end that can massively speed up your development cycle, as long as you're careful about reviewing and refactoring them.
It's poison to people learning but a tremendous boon to people who already know exactly what they want and how it should be implemented and just need a really fast typist.