Are there any straightforward guides to setting up an SSH “jump server” for an old system? Let’s say I have a VAX VMS system I want to provide access to. People would ssh to cmhvax@mysite with a password I share, and each session would open a telnet connection to the VAX, which they’d then log into with their real username and password. The past couple of times I’ve looked into it, I got pretty lost in a forest of documentation. #retrocomputing #openvms
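For what it’s worth, one common way to wire this up with stock OpenSSH is a forced command on the shared account. A minimal sketch, assuming a Unix-ish host running sshd in front of the VAX; the hostname and account name are placeholders, not anything from a real setup:

```shell
# In /etc/ssh/sshd_config on the jump host (sketch, adjust to taste).
# Every login as the shared "cmhvax" account is forced straight into
# telnet to the VAX; users then log in there with their own credentials.
Match User cmhvax
    ForceCommand /usr/bin/telnet vax.mysite.example
    # Lock the shared account down so it is only a hop, not a shell:
    AllowTcpForwarding no
    X11Forwarding no
    PermitTTY yes
```

The same effect is possible per-key with a `command="..."` option in `authorized_keys`, if you’d rather hand out keys than a shared password.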
meeple! I once read a computer science paper about a theory of version control that modeled things as sets of file versions, so that there would never be merge conflicts, just files that exist in multiple states simultaneously.
I only know that #OpenVMS's filesystem, #Files11, works with versions (though limited to 32767 per file): instead of deleting or overwriting a file, it saves a new full copy with the changes since the last save included.
The following paper on the problems with non-scalable locking designs certainly explains some of the empirical (and app-dependent) behaviors observed with OpenVMS on larger Alpha and Itanium multiprocessors.
This is mostly OpenVMS on Alpha and Itanium multiprocessors, and now on x86-64 multicore designs, as the VAX 9000 (Aquarius) topped out at four (scalar) processors, and other later (and, as compared with the 9000, much more affordable) VAX multiprocessors reached six or maybe eight processors.
In decades past, app performance on OpenVMS multiprocessors fell off pretty quickly starting around eight processors (or eight cores, more recently).
With the work that went into OpenVMS itself to break up spinlocks and to reduce related synchronization overhead, and with apps becoming more cognizant of NUMA hardware (where available), maintaining app performance on sixteen processors (cores) became more commonly feasible.
OpenVMS spinlocks are a somewhat simpler design than the ticket spinlocks Linux is currently using, too.
But somewhere along the path of adding more processors to scale app performance, Cache Coherency and Amdahl’s Law do come to visit, and performance craters.
@jfmezei Spinlocks, bitlocks, and interlocked queues are underneath most everything within OpenVMS.
The lock manager, the distributed transaction manager, and asynchronous system traps (ASTs) all synchronize their critical paths with these same primitives.
Apps and other OpenVMS services then use the locks provided by the lock manager for coordination, and ASTs for some (limited) asynchronous processing.
ASTs are a restricted form of threading, where a single process has at most two threads (mainline non-AST mode, and AST mode), and only one of the two can be active at a time. ASTs are fundamentally tied to a processor, as well. I’ve done a lot of AST-driven apps, particularly those using hibernation.
An aside for bug avoidance (read: learn from my mistakes): if you choose the AST-driven $hiber/$wake design (which works very well), do remember to also issue a $schdwk scheduled-wakeup call to un-stick a potentially stuck app, should a $wake call get lost somewhere. Multiple parallel $wake calls get coalesced into one, and of course $wake calls can arrive at any time, among other wrinkles. Having a periodic wakeup call will unstick any latent race condition.
Contrast ASTs with OpenVMS KP threads (KP threads are the underpinnings of pthreads), which allow the same app and its address space to be scheduled across multiple processors entirely in parallel.
If you’re interested in threading and such, also have a look at Apple GCD / libdispatch and blocks.
I very much would have liked to have had something GCD-like available on OpenVMS, having implemented similar queued designs (absent blocks, which are also tied into compiler support) on many occasions.
She explained further: “Often what you see in the industry is, ‘This is the Go native version of this other Rust or C++ library’ — you see this in every language ecosystem. We’re really just duplicating the same work and, in my opinion, it is just a waste of human effort and time.”
@SmartmanApps I do like the callback to 1970s (or ’80s) OSes. The #openvms operating system had an interop spec that made it "easy" to interoperate between a whole lot of languages at the time. C was hard because it was so tied to the file system. C++ had that problem plus method name mangling.
So I get the article's points about the difficulty of defining an interop standard for #wasm that avoids all those issues - and presumably a host of others that might also exist.
@pervognsen @steve @cr1901 @regehr The VAX/VMS kernel and a fair amount of VMS userland were written in VAX BLISS, and in VAX MACRO-32 assembler.
That implementation shifted with work to allow C in the VMS (later OpenVMS) kernel, and the remaining BLISS and MACRO-32 portions of the current kernel are now far smaller.
VSI still maintains BLISS and MACRO-32 compilers for OpenVMS on Alpha, Itanium, and x86-64.
@foone Back then when I was a HP-UX administrator I really hoped they would migrate it to x86_64 sometime. Maybe some company sells the IP like VSI with #OpenVMS :D
@kiwa Lovely device. I’ve been trying to get one for years; unfortunately they’re crazy expensive here in Germany. Would love to tinker with #OpenVMS on one of these. But you can also run specific Windows versions on it, and of course Linux, BSD, and Tru64 UNIX. You won’t regret it!
@dosnostalgic VAX/VMS circa 1978 used to boot with PDP-11 RSX-11M compatibility mode available and a fair chunk of the apps in the early VAX/VMS versions were RSX-11M apps running in compatibility mode.
The VAX-11 boxes supported PDP-11 instructions in hardware.
You could run your existing PDP-11 RSX-11M apps directly, too.
That all ended at VAX/VMS V4.0 (~1984), and with then-new VAX models after VAX 8600.
VAX 8600 was originally to be named VAX-11/790, but marketing marketed and dropped the -11 with the “architecture for the ‘80s”.
PDP-11 RSX-11M compatibility mode became a separate product, and the PDP-11 instructions were emulated, and the -11 was dropped from VAX.
Technically, an LSI-11 console processor booted RT-11 from the 8” console floppy, and that in turn booted the VAX-11/780 (organizationally within the hardware, the VAX was an enormous LSI-11 peripheral). The VAX ran VAX and PDP-11 instructions, and could run the simh emulator to emulate a PDP-11 running RT-11. If the LSI-11 failed (as happened on a couple of occasions), the VAX could continue to run. Just not reboot.
The approach Apple used for migrations with Rosetta and Rosetta 2 was far smoother.
Yeah. Fun times. When it all worked.
There are shenanigans in newer boxes too, but they’re usually somewhat better hidden.
And to think, Alpha EV7z peaked at 1.3 GHz, and a top-binned Itanium Poulson, err, Kittson peaked at 2.66 GHz, and here an old and slow i5 is now well past both at 3.4 GHz. (Yes, I am well aware of the perils of comparing clock speeds across disparate architectures, but thanks for mentioning that.)
Before the web and HTML became ubiquitous, there were videotext terminals and similar software packages around, most famously the French Minitel system.
The DEC VTX videotext layered product for the then VAX/VMS and MicroVMS (later OpenVMS) was inspired by Minitel.
While VTX wasn’t often purchased by DEC customers, nor used all that widely outside of DEC, there was a whole lot of VTX and Notes conferencing use inside DEC itself. VTX was replaced by other mechanisms, while Notes was deprecated but never died.
There are some accessible Notes conferences (free) at the DECUServe server, for those interested in experiencing that. I’m not aware of any sites still running DEC VTX.