Geek corner: tech discussions not suitable for other threads

None of that is the fault of NT. It was just how hardware was in those days. Plug and Play was initially supported on the consumer versions of Windows (95, 98, ME) and came to the NT line later (with Windows 2000 and its successors).

Sure, someone supporting lots of users in a company will have a different perspective compared to a home user. When I built a new computer back then I’d set those DIP switches, install NT, do stuff in the control panel to configure things, and then leave it alone. I never had to change anything once that stuff was initially done.

You could always tell how functional any OS and software was by the tickets they generated. I was close to having poledit.exe tattooed on my ass.

NT was originally designed for the Intel i860 XR RISC processor, which carried the cool code name N10 (reportedly where the “NT” name came from). By the time it was released it ran on 386 and 486 machines. It was new and exciting, but the System Policy Editor left much to be desired once an enterprise’s user base exceeded 500.

If nothing else, going from 3.51 to 4.0 woke someone up at Microsoft to design a functional OS with Active Directory to allow it to scale.

It’s more that I don’t like it, and consider it wasteful, to throw away perfectly good hardware that can still be used. There’s also the economics of it all: don’t buy new hardware just to get the latest and greatest when what I have still does its job and isn’t dead slow. I don’t really need the newest and greatest, so I prioritise quality electronics and good build quality over having the latest thing.

“That’s capitalism baby” does not excuse wasting resources.

As for W11, I consider it more insidious than just greed. They wanted a firmer grip on customers and their metadata, through Microsoft accounts, and to shunt them into paying for subscriptions. Computers without TPM 2.0 can run W11 just fine, if one can find the right tricks to tweak the installation process. But with local accounts, it gets harder for Microsoft to associate telemetry, metadata, and usage data with known computers and users. Thus they demanded TPM 2.0, forcing people to upgrade computers unnecessarily. And Microsoft now effectively acts as gatekeeper to your data and your access to your own computer, leaving both at the peril of Microsoft and the US administration.(*)

I’m not quite sure what you mean here. The only problems I have experienced with installing Linux distros are when the computers are 32-bit systems (and only in the past few years), when they just don’t have enough RAM, or when they have very new hardware or hardware with proprietary specs. However, you’d be amazed how small and restricted you can make a virtual machine and still be able to install some Linux distro on it (heck, you can even install and run Linux in a PDF). With standard hardware, one can make a good case for Linux being more backwards compatible than Windows, especially W11. The lowest-powered piece of hardware I have running Linux here (not counting a Raspberry Pi 2 running Pi-hole) is a NAS that I bought in 2012. It required a bit of work to get it upgraded to a newer OS version, since the model was EOL’d many years ago. Among other things, I had to double the RAM to 2GB of DDR2 and do some sneaky circumvention tricks. It now runs perfectly fine, and its I/O capacity is capped by the network, not the hardware.

With low or underpowered hardware, I find that it is not the Linux OS that limits the computer’s performance, but rather the GUI and applications that are too heavy and/or bloated. If you want good backwards compatibility for even old hardware, Linux is in my opinion the better option.

I’m not quite sure I follow you. If you mean if I use FOSS, then yes. I’ve been a FOSS enthusiast for the past three decades or so.

It means locking you into a system, making it deliberately difficult to do stuff the system designers didn’t want you to do, and hard to break free (“Oooooh…look at those handcuffs! They are a bit tight and restrictive, but they are so shiny and pretty to look at!”). And they make it deliberately difficult or cumbersome for non-MS OSes to work with MS-based infrastructure and file formats.

I have also passed the forty-year mark of being a computer geek, with around thirty years of working professionally with, and being superuser for, diverse computer installations, but I have yet to find out how to make Windows do what I want it to do. I just keep falling back to Linux or Unix-type OSes, as they are to me much easier to get up and running to do what I want. I guess we use the computers differently, and for different purposes.

(*) Consider the ICC judges Trump singled out for sanctions: they have had their Google and Amazon accounts suspended. Their MS accounts were not suspended, but that could very well have happened. If so, their W11 computers could potentially have been bricked.

it proves them at compile time

It cannot catch logic errors at compile time. How could it? How would it know what any given piece of code is supposed to do?

it can’t catch logical errors, but you can prove it won’t ever panic/crash

you can prove that a function maps every possible input to the output type provided in the function signature, no exceptions
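As a concrete sketch of what that claim means in practice, the Rust standard library’s `u32::checked_div` (a real std method) moves the failure case into the return type, so every possible input maps to a value of the declared output type. This illustrates the style of signature being described, not that all Rust code is panic-free:

```rust
// checked_div puts division-by-zero into the return type: every (a, b)
// pair maps to an Option<u32>, so failure is a value, not a hidden panic.
// (Plain `a / b` with a zero divisor would still panic at runtime.)
fn main() {
    assert_eq!(10u32.checked_div(2), Some(5));
    assert_eq!(10u32.checked_div(0), None); // advertised in the signature
    println!("ok");
}
```

The caller is forced by the `Option<u32>` type to handle the `None` case before it can use the result.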

Not possible. There are many hardware issues that can cause a crash, like a hardware bug check exception. No compiler can catch that. Logic errors can also cause a crash.

Wouldn’t this be a variation of Turing’s halting problem? Instead of trying to determine whether a program will halt or not, you try to determine whether it will crash or not. If it is an equivalent problem, it would be undecidable.

That is just examining the directed graph connecting the input and output to functions, the contract. That is not sufficient to prove that it will not crash.

yes, that is exactly what it does: it actually enforces the contracts/signatures you write, instead of giving you a rug pull with some random-ass edge case you’ve never heard of and that was never mentioned in the signature.

That doesn’t mean you can determine whether it will crash or not, as it doesn’t account for algorithmic correctness.

if function1 accepts all apples and has been proved to always return an orange, and function2 accepts all oranges and always returns a banana (no matter what), then calling function2(function1(apple)) will always return a banana, whether or not that is a good idea. What it won’t do is throw an unadvertised error/panic at 3am that you need years’ worth of experience to even know is hidden inside your own function. Also, the halting problem involves a (hypothetical) universal system that can tell whether arbitrary code will halt, not specific code. It is quite easy to prove that a particularly simple program with only 1 instruction is going to halt. Right?
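That apples-to-bananas argument can be written down directly in Rust; here is a minimal sketch with invented types (`Apple`, `Orange`, `Banana`) standing in for real ones:

```rust
#[derive(Debug, PartialEq)]
struct Apple;
#[derive(Debug, PartialEq)]
struct Orange;
#[derive(Debug, PartialEq)]
struct Banana;

// Every Apple in, always an Orange out: no other outcome type-checks.
fn function1(_a: Apple) -> Orange {
    Orange
}

// Every Orange in, always a Banana out.
fn function2(_o: Orange) -> Banana {
    Banana
}

fn main() {
    // The signatures guarantee the composition yields a Banana.
    let result = function2(function1(Apple));
    assert_eq!(result, Banana);
    // function2(Apple) would be rejected at compile time, not at 3am.
}
```

Of course, as noted above, this only proves the contract between the signatures, not that the bodies compute anything sensible.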

A modern OS like Windows, MacOS, or Linux is far too complicated to make blanket statements like “using a specific compiler (rust in this case) is sufficient to prevent crashes”. That’s just not possible given the complexity and the large number of paths through the code.

Careful coding by experienced engineers can prevent the vast majority of issues that languages like rust are intended to prevent. They’re not magic; they just enforce conventions that good developers can enforce themselves.

Thirty years ago I worked at a company that developed a proprietary operating system used by customers in the European nuclear industry. In the eight years I worked for the company, only one bug was reported against this OS from customers in the field. This OS was written in a combination of assembly and C, and we took great care with our coding practices not to introduce race conditions, memory leaks, etc. Our biggest weapon against coding errors was code reviews, where we scrutinized the code literally line by line and caught issues that would have caused runtime failures. The developer responsible for each error discovered during the code reviews had to write a formal response describing the error, why it occurred, the proper fix, and the steps he/she would take to prevent similar errors in the future.

Another organization that developed nearly bug-free code was the one that developed the flight software for the Space Shuttle. I doubt Microsoft and Apple use a development process as rigorous as this, and I know the ad hoc system used for Linux development doesn’t.

The bottom line is that it’s perfectly possible to write correct, bug-free code in C; it’s just that the compiler doesn’t hold the developer’s hand and enforce data safety, so it’s up to the developer to do that. In the industry, in my experience, there are developers, and then there are super developers. The best developers tend to be around 10x as productive as average developers, and they create code with significantly fewer design issues and bugs. Unfortunately, these super developers are rare: most companies might have one or two of them, along with dozens of average-to-mediocre developers. The result is a code base full of design flaws and bugs that lead to security issues. That’s not the fault of the language; it’s the low competence level of the majority of the developers who wrote the code. That is compounded by the push to bring things to market quickly and the push to reduce development costs.

If a development team doesn’t have the competence to develop code in a rigorous manner, for whatever reason, then perhaps they should use rust to prevent commonly avoidable mistakes in coding.

On having decent hardware… I like having up-to-date sh#t. If I can use the older hardware, I do… if not, I sell it. These days I’m tending towards mini-PCs for my back end stuff although I still love my full tower desktop because of all the things I can do with it and how I can upgrade it.

Perhaps not but it is a reality of the world we live in. Do you still ride a Penny-farthing, drive a car from the 70s, use a non-smart mobile phone or live in a ramshackle hut with no electricity or running water, perhaps even made of mud and trees? Maybe you do some of these, I don’t know you, but I’m betting you don’t because you, like me, buy into the latest stuff to some extent so it’s all relative; we’re all victims of progress and of capitalism.

Hell, I could still install Windows 3.1 if I could find my 3½" floppy drive (sure, it’s USB, but I still have one). 9 or 15 disks, wasn’t it? I suspect I might have an issue with hardware support though.

IMO Windows 11 is pretty good… granted, that’s as long as you remove all the bloatware, reconfigure the menu to be like the previous iteration, and a few other things. I’m happy to try out new versions, but the one thing I want from MS, they won’t give me: a version of Windows that has no f#cking bloatware (no AI or any sh#t like that) and the ability to completely customise it any way I want from scratch. Even though they appear to have backed off a little from their move towards an “agentic OS”, they won’t give me what I really want: an OS and nothing else, i.e. no AI, no browsers, no AV, no apps, no bloatware, no anything. That doesn’t equate to wanting Linux (which also doesn’t provide that unless you go for Linux From Scratch or that other weird one I tried a decade ago); it just means I can envisage a better Windows.

And even if I do decide to adopt Linux, there’s the implied point you didn’t address when I responded about desktop environments: that there are thousands of Linux distros, each often with several variants, and that it’s almost pointless asking for advice on which to choose because they’re all different even when they have the same damned DE. IMO they also suffer from the bloatware problem.

You’re absolutely right about the US administration, though, and it’s something I should think on more, especially since I try not to buy anything else American right now. In my defence, I don’t have a Microsoft account on my PC.

I’m not a particular fan of virtual machines (except perhaps as a back end, where I don’t use them like a desktop) but, having installed Linux on a Pi 3 just a couple of years back, no, I wouldn’t be surprised. I’ve rarely had real issues getting older software to work on Windows… I was in IT, remember.

Yes, I meant OSS.

Any OS will lock you into its ecosystem, even Linux; in my experience it’s the apps that truly lock me in. Like I mentioned (or hinted at) before, while I’ve found a decent replacement for MS Office (one that isn’t LibreOffice), I still want a decent email client to replace Outlook, one that isn’t Thunderbird, because I happen to value aesthetics. Those are negotiable, though, and some things aren’t, such as my writer’s program (“Scrivener”) and a grammar checker (“Pro Writing Aid”), which are far superior to any others available on Linux.

As you say, we use the computers differently, and for different purposes.

That link is behind a paywall but here’s one for you in return which highlights the kind of thing I’m saying i.e. that Linux isn’t anywhere near as easy as some suggest:

UK Atheist

Yes and no.

It isn’t that I’m not competent to do all that stuff or that I wouldn’t find it fun. It is just that for the problem domains I operate in (broadly, line-of-business software), no one will pay me to do it, particularly in C or C++. It’s not the correct level of abstraction.

Languages like C#, on the other hand, will do more of that on my behalf at acceptable (even impressive) performance levels, so that I can focus on business logic. And MSFT has invested massively in fine-tuning the runtime and its memory management. The optimization work on .NET 10 alone ran into hundreds of pull requests; each major release makes significant advances.

I still wouldn’t use C# for low level OS or gaming code or maybe for a large complex product like Office, but they have eliminated “stop the world” garbage collection and other criticisms such that IMO it’s fine for almost everything else. And you can now produce self contained native executables if you really want them.

One can make similar arguments such as “programmers these days aren’t competent enough to write in assembly language” but it is really more about the sweet spot for the abstractions you’re using in the problem domain you’re addressing, which governs demand. As the abstractions become higher level, there’s more work that’s feasible to be solved in software.

C# has come a long way since its inception and can now be used for almost any task short of something that needs the ultimate performance. Visual Studio itself is written in C#, and a large, complex project like Office could be written in modern C# without a noticeable decrease in performance. One of the applications I use for software defined radio is written in C# and it does DSP in real time, including an FFT to display a continuous spectrum waterfall.

The latest version can do AOT instead of JIT compilation, resulting in a native code executable that starts up faster, even though it still does garbage collection. Managed code environments like C#/.NET are great for most things, but it wouldn’t make sense to write an operating system in it. That’s where languages like C and rust come in, but for nearly anything else, C# is fine.

The situation on MacOS is somewhat different. The modern way to write apps for the MacOS/IOS ecosystem is to use Swift and SwiftUI. Swift isn’t a managed-code system: it compiles directly to native machine code and uses automatic reference counting rather than garbage collection to manage memory. This makes it more deterministic, but it requires more deliberate attention to memory and an understanding of how weak references work. Once Swift memory management is understood, it’s not onerous to deal with. And unlike C#, Swift uses a POP (Protocol Oriented Programming) model rather than OOP (Object Oriented Programming), i.e. it favours composition over inheritance.

OK,

Trying to recuse myself from the Linux vs Windows thing (let’s be honest, no one’s gonna win as we’re all fans of whatever we like and with what seem like good personal reasons), how about 3D printing? I know there’s at least one person here who does it.

My one and only 3D printer (a FlashForge AD5X) broke down a couple of weeks back and, while I have every hope of getting it fixed through their support channel, that scuppered my plans to use said printer for financial gain. Given that I’ve realised that colour printing is largely pointless (or at least expensive if the prints aren’t designed properly), I figured it might be better to have a second printer, a “workhorse” so to speak.

I’ve got a maximum of £250 to spend on one… sure, I could go higher if I sold more stuff on eBay or whatever, but it seems good to set a limit. What I’m after is a mono FDM printer that’s reasonably fast, has a good PEI bed, and [maybe] a slightly bigger print area than my AD5X’s 225x225.

The ones I’m considering are:

  • The Creality Hi (£209 from Creality)
  • Ender-3 V3 SE (£149 from Creality)
  • The FlashForge AD5M (£209 from Amazon but it can probably be got cheaper elsewhere)
  • The Anycubic Kobra 3 (£199 from Anycubic)

Does anyone have any other suggestions as to a reasonably competent budget filament printer? I’m tempted by secondhand ones but don’t really trust eBay (ironic since I’ll sell on it).

UK Atheist

I favor composition over inheritance, but there are times (not nearly so often as we thought in the early days of OOP) when inheritance is extremely useful. I am not familiar with Swift’s protocol-oriented approach, but I find JavaScript, which also avoids classical inheritance (via prototypes), a bit counter-intuitive in that regard.

That said, I imagine I could get by just fine without an inheritance mechanism (though I think code would be a little less self-evident as a result). It isn’t unimaginable to me because I started doing software development in the early 1980s, before OOP was a thing (outside maybe academic circles and Smalltalk). I didn’t really start using it until the 90s.
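For what it’s worth, Rust (the language argued about upthread) dropped class inheritance entirely in favour of traits. A minimal sketch of that composition style, with invented names (`Logger`, `Service`, etc.):

```rust
// A capability expressed as a trait rather than a base class.
trait Logger {
    fn log(&self, msg: &str) -> String;
}

struct ConsoleLogger;

impl Logger for ConsoleLogger {
    fn log(&self, msg: &str) -> String {
        format!("[console] {}", msg)
    }
}

// Service *has* a logger (composition); it doesn't inherit from one.
struct Service<L: Logger> {
    logger: L,
}

impl<L: Logger> Service<L> {
    fn run(&self) -> String {
        self.logger.log("service started")
    }
}

fn main() {
    let svc = Service { logger: ConsoleLogger };
    assert_eq!(svc.run(), "[console] service started");
}
```

Swapping in a different `Logger` implementation changes behaviour without any inheritance hierarchy, which is roughly the trade being discussed.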


yeah i never understood why inheritance was so popular, it doesn’t seem to model the real world, outside of genetics.

It was just another innovation that was supposed to solve all or most of the problems in software development, and it was oversold. At this point in my career (year 43) I’ve seen countless such things come and go. Many of them are still around but just with realistic expectations and more limited use cases than were hoped for.

So-called “AI” is just the latest. It’s not going away, but it’s far more useful and appropriate for certain applications (e.g. drug discovery, reading subtle patterns in radiological images) than for others; it’s not going to lead to AGI or replace creatives or whatever.

One of the silliest “innovations” I’ve seen in my time in the industry is the trend to abstract everything to the nth degree, so that even simple operations result in so many function calls that performance goes to pot. It’s no wonder that applications that ran just fine on machines with 256MB of memory now require 8-16GB.

Part of this is the “design patterns” trend, which encourages silly constructs like AbstractFactoryConstructorDelegationVisitorSingletonFactory().
