then: michaelsoft binbows
now: microslop losedows
next year for sure: year of the linux desktop
Windows 11 is not being developed by people. It is entirely undeveloped by AI.
I have to use windows 11 and teams for all my work. Teams is being utilized as the central file management system.
The benefit these days is that if something doesn’t work I can shrug and say ‘must be AI’ or ‘just Windows 11 things’ and generally get tacit agreement.
using teams for file management
* Looks inside *
This is just SharePoint 🤮
* Looks inside *
This is just OneDrive 🤮
* Looks inside *
This is still just SharePoint!?!?!

Sounds like all the hard work they did refactoring Windows 10 is gonna go to waste with the new AI vibe coding in Windows 11.
If they keep on like this, ReactOS might actually one day surpass Windows as the better Windows-like OS
Isn’t it already?
I’ve started using AI pretty heavily for writing code in languages I’m not as confident in (especially JS and SQL) after being skeptical for a while, as well as for code that can be described briefly but is tedious to write. I think the problem here is “by” - it would be better to say “with”.
You don’t say that 90% of code was written by code completion plugins, because it takes someone to pick the right thing from the list, check the docs to see that it’s right, etc.
It’s the same for AI. I check the “thinking”/planning logs to make sure the logic is right - sometimes it is, sometimes it isn’t, at which point you can write a brief pseudocode sketch of what you want to do. Sometimes it starts on the right path then goes off, at which point you can say “no, go back to this point”, and generally it works well.
I’d say this kind of code is maybe 30-50% of what I write, the other 50-70% being more technically complex and in a language I’m more experienced in. So I can’t fully believe the 30% figure, when some people are wasting time by not using it where it would speed them up, and others are using it too much, wasting time trying to implement more complex things than it’s capable of. The latter irks me especially after spending 3½ hours yesterday reviewing a new hire’s MR - time they could’ve spent actually learning the libraries, or I could’ve spent implementing the whole ticket with some left over to teach them.
Large language models can’t think. The “thinking” it spits out to explain the other text it spits out is pure bullshit.
Why do you think I said
"thinking"/planning instead of just calling it thinking… The “thinking” stage is actually just planning: it lists out the facts and then tries to find inconsistencies, patterns, solutions, etc. I think planning is a perfectly reasonable thing to call it, as it matches the distinction between planning and execution in other algorithms, like navigation.
“Thinking” is just an arbitrary process to generate additional prompt tokens. By now the vendors have realized that people suck at writing prompts, and that their models clearly lack causal or state models of anything. The models are simply good at word substitution into a context that is similar enough to the prompt they’re given. So a solution to sucky prompt writing, and a way to sell people on its capacity (think Full Self-Driving - it’s never been full self driving, but it’s marketed that way to make people think it is super capable), is to simply have the model itself look up better templates within its training data that tend to result in better looking and sounding answers.
The thinking is not thinking. It’s fancier probabilistic lookup.
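To make the “probabilistic lookup” claim concrete, here’s a toy caricature in Python. A real LLM is a neural network over a huge vocabulary, not a literal table - but the generation loop genuinely has this shape: look up a probability distribution over next tokens given the context, sample one, append it, repeat. The bigram table and the `generate` function are hypothetical, invented purely for illustration.

```python
import random

# Hypothetical toy "model": for each token, a probability distribution
# over possible next tokens. Stands in for what a real network computes.
BIGRAMS = {
    "the":      {"model": 0.6, "chain": 0.4},
    "model":    {"just": 1.0},
    "chain":    {"just": 1.0},
    "just":     {"predicts": 0.7, "samples": 0.3},
    "predicts": {"tokens": 1.0},
    "samples":  {"tokens": 1.0},
}

def generate(start: str, steps: int, rng: random.Random) -> str:
    """Sample a short token sequence by repeated probabilistic lookup."""
    out = [start]
    for _ in range(steps):
        dist = BIGRAMS.get(out[-1])
        if not dist:  # no known continuation for this token
            break
        tokens, weights = zip(*dist.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the", 4, random.Random(0)))
```

On this picture, a “thinking” stage is just running the same loop longer before emitting the final answer - more sampled tokens conditioning later sampled tokens, nothing categorically different.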