Those operating systems already exist. You can run NetBSD on pretty much anything (it currently supports machines with a Motorola 68k CPU, for example). Granted, many of those machines still have an MMU iirc, but everything is still simple enough to be comprehended by a single person with some knowledge of systems programming.
Mmm... I would beg to differ. I have ported stuff to NOMMU Linux and almost everything worked just as on a "real" Linux. Threads, processes (except only vfork, no fork), networking, priorities, you name it. DOS gives you almost nothing. It has files.
The one thing different from a regular Linux was that a crash of a program was not "drop into debugger" but "device reboots or halts". That part I don't miss at all.
This was interesting. It reminded me of how weird fork() is, and I found an explanation for its weirdness that loops back to this conversation about nommu:
"Originally, fork() didn't do copy on write. Since this made fork() expensive, and fork() was often used to spawn new processes (so often was immediately followed by exec()), an optimized version of fork() appeared: vfork() which shared the memory between parent and child. In those implementations of vfork() the parent would be suspended until the child exec()'ed or _exit()'ed, thus relinquishing the parent's memory. Later, fork() was optimized to do copy on write, making copies of memory pages only when they started differing between parent and child. vfork() later saw renewed interest in ports to !MMU systems (e.g: if you have an ADSL router, it probably runs Linux on a !MMU MIPS CPU), which couldn't do the COW optimization, and moreover could not support fork()'ed processes efficiently.
Another source of inefficiency in fork() is that it initially duplicates the address space (and page tables) of the parent, which may make running short programs from huge programs relatively slow, or may make the OS deny a fork() thinking there may not be enough memory for it (to work around this one, you could increase your swap space, or change your OS's memory overcommit settings). As an anecdote, Java 7 uses vfork()/posix_spawn() to avoid these problems.
On the other hand, fork() makes creating several instances of the same process very efficient: e.g: a web server may have several identical processes serving different clients. Other platforms favour threads, because the cost of spawning a different process is much bigger than the cost of duplicating the current process, which can be just a little bigger than that of spawning a new thread. Which is unfortunate, since shared-everything threads are a magnet for errors."
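To make the vfork() restriction concrete, here is a minimal sketch of the pattern the quote describes (my own example, not from the quoted text): after vfork() the child borrows the parent's address space, so it must do nothing but exec or _exit, and the parent stays suspended until one of those happens.

    /* Hypothetical sketch: spawn /bin/echo via vfork()+exec().
     * The child shares the parent's memory until it execs or exits. */
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        pid_t pid = vfork();
        if (pid == 0) {               /* child: only exec or _exit allowed */
            execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
            _exit(127);               /* reached only if exec failed */
        } else if (pid > 0) {         /* parent resumes after exec/_exit */
            int status;
            waitpid(pid, &status, 0);
            return 0;
        }
        perror("vfork");
        return EXIT_FAILURE;
    }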
That's fair. If so, then you can still have things like drivers and a HAL and so on too. However, there are no hard security barriers.
How do multiple processes actually work, though? Is every executable position-independent? Does the kernel provide the base address(es) in register(s) as part of vfork? Do process heaps have to be constrained so they don't get interleaved?
There are many options. Executables can be position-independent, or relocated at run-time, or the device can have an MPU or equivalent registers (for example 8086/80286 segment registers), which is related to an MMU but much simpler.
Executables in a no-MMU environment can also share the same code/read-only segments between many processes, the same way shared libraries can, to save memory and, if run-time relocation is used, to reduce that overhead.
The original design of UNIX ran on machines without an MMU, and they had fork(). Andrew Tanenbaum's classic book, which comes with Minix for teaching OS design, explains how to fork() without an MMU, as Minix runs on machines without one.
For spawning processes, vfork()+execve() and posix_spawn() are much faster than fork()+execve() from a large process in no-MMU environments, though, and almost everything runs fine with vfork() instead of fork(), or with threads. So no-MMU Linux provides only vfork(), clone() and pthread_create(), not fork().
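For illustration, a hedged sketch of the posix_spawn() route (my own example, not from the comment above); on no-MMU systems the C library typically builds this on vfork()/clone() plus execve(), so the parent's address space is never copied. The command being spawned here is arbitrary.

    /* Hypothetical sketch: run "ls -l /" in a child via posix_spawnp(). */
    #include <spawn.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        pid_t pid;
        char *argv[] = { "ls", "-l", "/", NULL };  /* example command only */
        int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawnp failed: %d\n", err);
            return 1;
        }
        int status;
        waitpid(pid, &status, 0);     /* reap the child like any other */
        return 0;
    }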
Thanks! I was able to find some additional info on no-MMU Linux [1], [2], [3]. It seems position-independent executables are the norm on regular (MMU) Linux now anyway (and probably have been for a long time). I took a look under the covers of uClibc and it seems like malloc just delegates most of its work to mmap, at least for the malloc-simple implementation [4]. That implies to me that different processes' heaps can be interleaved (without overlapping), but the kernel manages the allocations.
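For a feel of what that looks like (a rough sketch in the spirit of malloc-simple, not the actual uClibc code): every allocation becomes its own anonymous mapping, the kernel picks the address, and the mapping length is stashed in a small header so free() can munmap() it.

    /* Hypothetical mmap-backed allocator sketch (not uClibc's source). */
    #include <stddef.h>
    #include <sys/mman.h>

    void *simple_malloc(size_t size) {
        size_t total = size + sizeof(size_t);
        size_t *p = mmap(NULL, total, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        *p = total;            /* remember mapping length for free() */
        return p + 1;          /* user data starts after the header */
    }

    void simple_free(void *ptr) {
        if (ptr) {
            size_t *p = (size_t *)ptr - 1;
            munmap(p, *p);     /* hand the whole mapping back to the kernel */
        }
    }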
Under uClinux, executables can be position independent or not. They can run from flash or RAM. They can be compressed (if they run in RAM). Shared libraries are supported on some platforms. All in all it's a really good environment and the vfork() limitation generally isn't too bad.
I spent close to ten years working closely with uClinux (a long time ago). I implemented the shared library support for the m68k. Last I looked, gcc still included my additions for this. This allowed execute-in-place for both executables and shared libraries -- a real space saver. Another guy on the team managed to squeeze the Linux kernel, a reasonable user space and a full IPsec implementation into a unit with 1MB of flash and 4MB of RAM, which was pretty amazing at the time (we didn't think it was even possible). Better still, from power-on to login prompt was well under two seconds.
> The original design of UNIX ran on machines without an MMU, and they had fork().
The original UNIX also did not have virtual memory as we know it today – page cache, dynamic I/O buffering, memory-mapped files (mmap(2)), shared memory, etc.
They all require a functioning MMU, without which the functionality would be severely restricted (but not entirely impossible).
The no-MMU version of Linux has all of those features, except that memory-mapped files (mmap) are limited. Page cache, dynamic I/O buffering and shared memory work the same as in MMU Linux. No-MMU Linux also supports other modern memory-related features, like tmpfs and futexes. I think it even supports io_uring.
That is not how a VMM subsystem works, irrespective of the operating system, be it Linux, or Windows, or a BSD, or z/OS. The list goes on.
Access to a page that is not resident in memory results in a trap (an interrupt), which is handled by the MMU – the CPU has no ability to do it by itself. That is the whole purpose of the MMU, and it was a major innovation of BSD 4 (a complete VMM overhaul).
But three out of those four features: page cache, dynamic I/O buffering and shared memory between processes, do not require that kind of VMM subsystem, and memory-mapped files don't require it for some kinds of files.
I've worked on the Linux kernel and at one time understood its mm intimately (I'm mentioned in kernel/futex/core.c).
I've also worked on uClinux (no-MMU) systems, where the Linux mm behaves differently to produce similar behaviours.
I found most userspace C code and well-known CLI software on Linux, and nearly all drivers, networking features, storage, high-performance I/O, graphics, futex, etc., run just as well on uClinux without source changes, as long as there's enough memory (with some more required, because uClinux suffers from a lot more memory fragmentation due to needing physically contiguous allocations).
This makes no-MMU Linux a lot more useful and versatile than alternative OSes like Zephyr for similar devices, but the limitations and unpredictable memory fragmentation issues make it a lot less useful than Linux with an MMU, even if you have exactly the same RAM and no security or bug concerns.
I'd always recommend an MMU now, even if it's technically possible for most code to run without one.
The original UNIX literally swapped processes, as in writing all their memory to disk and reading another program's state from disk into memory. It could only run as many processes as the ratio of swap size to core size allowed, which is a wholly unacceptable design nowadays.
In an embedded scenario where the complete set of processes that are going to be running at the same time is known in advance, I would imagine that you could even just build the binaries with the correct base address in advance.
A common trick to decrease code size in RAM is to link everything into a single program, then have the program check its argv[0] to know which program to behave as (see the sketch below).
With the right filesystem (certain kinds of read-only), the code (text segment) can even be mapped directly, and no loading into RAM need occur at all.
These approaches save memory even on regular MMU platforms.
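A minimal sketch of the argv[0] trick mentioned above (BusyBox-style multi-call binary; the applet names here are made up): install one binary, symlink it under several names, and dispatch on the name it was invoked as.

    /* Hypothetical multi-call binary: behaviour depends on argv[0]. */
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>

    static int do_hello(void) { puts("hello"); return 0; }
    static int do_bye(void)   { puts("bye");   return 0; }

    int main(int argc, char **argv) {
        (void)argc;
        const char *name = basename(argv[0]);  /* "hello", "bye", ... */
        if (strcmp(name, "hello") == 0)
            return do_hello();
        if (strcmp(name, "bye") == 0)
            return do_bye();
        fprintf(stderr, "unknown applet: %s\n", name);
        return 1;
    }

Shipped as one executable with symlinks named after each applet, the code segment is loaded (or XIP-mapped) only once, however many names it answers to.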
How many of those legacy applications where the source is not available actually need to run natively on a modern kernel?
The only thing I can think of is games, and the Windows binary most likely works better under Wine anyways.
There are many embedded systems like CNC controllers, advertisement displays, etc... that run those old applications, but I seriously doubt anyone would be willing to update the software in those things.
You can't seriously expect a new GPU manufacturer to create a perfectly useable ecosystem on day one.
The drivers will surely get better over time and support for integrating the compute stack that they use will come if the incentive is good enough.
I really hope this doesn't turn out like HIP in AMD Radeon cards. That is absolute dog shit and has been dog shit for ages. It's really sad that an AMD card from 2017 is useless for compute while an equivalent Nvidia card from the same era is just now getting dropped by the latest CUDA versions.
My phone is a piece of junk from 8 years ago and I haven't noticed any degradation in browsing experience. A website takes like two extra seconds to load, not a big deal.
Honestly, I don't see much of a problem if this is applied to imports of single items by an end user. It used to be that I had trouble importing some devices, partly because the power supply was not certified by the local regulatory entities. Most of what people import in single quantities is electronics with switch-mode power supplies that work from 100-240 V at 50/60 Hz. I doubt many people are importing a hairdryer or a toaster. Personally, if a power supply is approved by the FCC or some other important entity, I consider it good enough for my personal use, even if it has a foreign plug.
It is a problem for importing large quantities to resell, though; I'm not defending the ability to import hundreds of death traps and sell them to people.
As I understand it, the most frequent type of importation by item count is wholesale – we could be talking about the import of 500k phone chargers.
I think plug types are not a great risk, as users will usually not want a foreign plug anyway. But in my head the risk is that we import 500k of something that technically works but is off spec by 10 V or 10 Hz, or whose tolerance specs are too wide or too narrow. It's obvious how too narrow a tolerance can cause issues, but too wide isn't ideal either, as there are tradeoffs – you end up importing Swiss-army-knife products. Which makes sense for big, expensive electronics, but for stuff like phone chargers? Subterranean or aerial cabling?
The task of verifying the quality of something is distinct from the task of verifying that it conforms to the local standards. And I wouldn't put it past cargo-culting governments to figure that if it's good enough for the US, it's good enough for us.
My expectation is that the core clock circuit has its capacitance and/or inductance change, thus changing the timing of the clock.
+/-5% is a region where everything in the digital domain probably still works. Your rise/fall time and dead-time / other critical timings need to be robust against some degree of variability. Transistors can have rather wide manufacturing variability after all (certainly wider than 5%).
So everything still works but the core clock is changing. Which btw, happens in traditional silicon circuits as they heat up or cool down.
A low-precision RC oscillator changing by 5% or so between 20C and 100C is within expectations. In fact, a -50%/+100% change wouldn't surprise me.
--------------
Old var-caps (variable capacitors) could be tuned by twisting them tighter or looser. No joke. So that's where my expectation comes from: they've changed the capacitance of some core element that controls an important clock.
Many resistive materials, especially those that are semiconductors, have changes of resistivity caused by mechanical strain.
This so-called piezoresistive effect is frequently used for measuring the deformations of various objects, by attaching piezoresistive wires to them, which can measure for instance the amount of bending of the object.
Such a flexible integrated circuit might also have changes in the resistance of the transistor channels or of the interconnection traces, which will change the maximum permissible clock frequency. If an RC oscillator is used to generate a clock signal, its frequency will change with the bending of the circuit, more likely due to variations of the resistance than of the capacitance, because it is not likely for the bending to cause large variations in the thickness of the dielectric of the capacitors or in the area of the electrodes, even if that is also possible.
The variable capacitors whose capacitance is changed by twisting have this behavior because their electrodes overlap only partially and the twisting changes the area of the overlapping region. No such thing happens when twisting or bending a normal capacitor.
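To put a rough number on the piezoresistive effect (my figures, not from the comment above): strain gauges are usually characterised by ΔR/R = K·ε, where ε is the strain and K the gauge factor – around 2 for plain metals, but potentially in the tens to hundreds for semiconductors. So straining a semiconducting trace by even a fraction of a percent can plausibly shift its resistance, and hence an RC time constant, by a few percent.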
> which will change the maximum permissible clock frequency.
Emphasis on _permissible_ clock frequency. Because how is the core logic supposed to figure out how much the clock frequency has changed or how much the resistance of the wires has changed?
> because it is not likely for the bending to cause large variations in the thickness of the dielectric of the capacitors or in the area of the electrodes, even if that is also possible.
Yes but no. Everything you said is correct, but you're looking at the wrong dielectric. The plastic PCB is obviously unchanging, even as it gets balled up.
However, there's another dielectric here that's normally ignored but suddenly becomes relevant. The _relevant_ dielectric (to this discussion) is the air. As the capacitor rolls up into a cylinder shape, the copper-air-copper capacitor has its dielectric (air) get thinner and thinner.
-------------------
However, to your point that this is "resistance"... the fact that "rolling one way" leads to -speed and "rolling the other way" leads to +speed suggests that it's a resistance issue, because the strain/resistance relationship is known: stress/tension causes the resistance of copper to grow, while compression causes it to drop.
If the oscillator is an RC-type oscillator (e.g. a 555-timer-like oscillator), then yes, I can see the resistance theory playing out. And 60 kHz is slow enough that RC-type oscillators are possible.
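To put numbers on that (a back-of-the-envelope example, not anything from the article): a standard 555 astable runs at roughly f ≈ 1.44 / ((R1 + 2·R2)·C), so 60 kHz falls out of, say, C = 1 nF with R1 + 2·R2 = 24 kΩ. Since f scales as 1/R here, a 5% rise in the effective resistance shows up directly as roughly a 5% drop in frequency – the same order as the shift being discussed.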
> Because how is the core logic supposed to figure out how much the clock frequency changed
It is common for such logic circuits to use clock generators made with a so-called ring oscillator, i.e. a chain with an odd number of inverters connected in a loop. The clock period will be a multiple of the delay through a logic inverter.
In this case the actual clock frequency tracks exactly all changes in the permissible clock frequency, regardless of their causes, including temperature and mechanical deformation.
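As a rough worked example (my own numbers): a ring of N inverters oscillates with period T ≈ 2·N·t_inv, so f ≈ 1/(2·N·t_inv); with N = 41 stages and a 200 ps gate delay that is about 61 MHz. If temperature or bending slows every inverter by 5%, t_inv grows by 5% and the generated clock drops by the same 5%, so the logic keeps running inside its own timing margins.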
> As the capacitor rolls up into a cylinder shape, the copper-air-copper capacitor has the dielectric (air) get thinner-and-thinner.
I am not sure which is the copper-air-copper capacitor to which you refer. On a PCB, there are parasitic copper-air-copper capacitors between traces, but they have very little influence on clock frequencies. On a normal integrated circuit, there is no air. The metal layers are separated by insulator layers and the top metal is covered by a passivation layer. This flexible circuit should also be covered by some passivation layer.
Replacing in your argument the copper-air-copper capacitor with a copper-insulator-copper capacitor, any circuit has two kinds of capacitors, those that are made intentionally, with two overlapped metal electrodes and a very thin insulator layer between them, and the parasitic capacitors that exist between any metal traces.
Your argument is valid for the parasitic capacitors, because the distance between traces will vary with bending and some parasitic capacitors will become larger, while others will become smaller. The effect of each of the parasitic capacitors on the permissible clock frequency is small and the global effect of all parasitic capacitors is unpredictable without a concrete circuit layout, because their changes with the bending may compensate each other.
For an intentional capacitor, the effect mentioned by you also exists, but in most technologies for integrated circuits the thickness of the insulator of the capacitors is very small in comparison with the lengths and widths of the electrodes. In this case only a very small part of the electromagnetic field is outside the internal space of the capacitor and its influence on the value of the capacitance is negligible. Perhaps the capacitors made with this flexible technology are not as thin in comparison with their area as in other technologies, in which case the effect mentioned by you could be measurable, but I doubt it.
An iPhone 8 still has a lot of processing power for headless home server tasks. I use a much weaker ARM dev board as an ssh gateway and Wireguard VPN into my home network and it works just fine. The only thing I'd worry about is leaving the battery on the phone and having it puff up after being trickle charged for months on end.
But if you remove the battery and mod the phone to power it directly from an external power supply you're all set!
It has been edited since the submitted title GP was talking about, which was something like your suggestion; possibly by the mod team to desensationalise it (I don't know).
I've always had a feeling that mixing caffeine and alcohol was a really bad thing, even when binge drinking as a teenager. Not that limiting myself to only alcohol was a healthy alternative or anything...
I wonder how much damage (if any) that caused considering I didn't do it very frequently. And how much damage could it do to someone that does it every weekend during their late teenage years?