The incredible irony of putting retired programmers to work through “uber for cobol freelancers” because every state’s unemployment system is overloaded.
I don’t understand why they need COBOL programmers when it is a frontend load problem. Let some cheap Javascript guys write a new frontend with a cache.
Writing performant five nines scalable front end systems takes more than “some cheap Javascript guys” and a cache.
This would be private cloud k8s (systems engineering / dev ops / security), and maximum compatibility for any device (SSR + cache), and be able to serve native apps as well as web pages. For proper horizontal scaling there would be microservices for all sorts of things like authentication, user profile, transaction history etc.
Who said anything about “five nines”? All we know from the article:
A couple of hundred claims a week would be a lot in normal times, he says. When the coronavirus hit, millions of claims suddenly were being filed and hitting the front end, which could not handle the massive increase in volume.
I’d say it wouldn’t be that critical if it was down half the time.
Not sure if these “millions of claims” were submitted within days, weeks, or months. So we have no clue about the volume. Maybe a single server would be fine. But who am I kidding, the cheap Javascript guys will probably build a distributed cloud monster anyways…
As someone who works in a "Rational" environment, I don't really see a way out, because that is just how this industry works. It isn't about changing the way our organization works; it would require changing our customers.
It is almost like things such as PMBOK (which has now changed to a principles-based body of knowledge)... these things have no basis in the scientific method (they are not empirically grounded), having their origins in all the DOD needs
Also reminds me of this important research article "The two paradigms of software development research" posted here before https://group.lt/post/46119
The two categories of models use substantially different terminology. The Rational models tend to organize development activities into minor variations of requirements, analysis, design, coding and testing – here called Royce's Taxonomy because of their similarity to the Waterfall Model. All of the Empirical models deviate substantially from Royce's Taxonomy. Royce's Taxonomy – not any particular sequence – has been implicitly co-opted as the dominant software development process theory [5]. That is, many research articles, textbooks and standards assume:
Virtually all software development activities can be divided into a small number of coherent, loosely-coupled categories.
The categories are typically the same, regardless of the system under construction, project environment or who is doing the development.
The categories approximate Royce's Taxonomy. ... Royce's Taxonomy is so ingrained as the dominant paradigm that it may be difficult to imagine a fundamentally different classification. However, good classification systems organize similar instances and help us make useful inferences [98]. Like a good system decomposition, a process model or theory should organize software development activities into categories that have high cohesion (activities within a category are highly related) and loose coupling (activities in different categories are loosely related) [99].
Royce's Taxonomy is a problematic classification because it does not organize like with like. Consider, for example, the design phase. Some design decisions are made by “analysts” during what appears to be “requirements elicitation”, while others are made by “designers” during a “design meeting”, while others are made by “programmers” while “coding” or even “testing.” This means the “design” category exhibits neither high cohesion nor loose coupling. Similarly, consider the “testing” phase. Some kinds of testing are often done by “programmers” during the ostensible “coding” phase (e.g. static code analysis, fixing compilation errors) while others are often done by “analysts” during what appears to be “requirements elicitation” (e.g. acceptance testing). Unit testing, meanwhile, includes designing and coding the unit tests.
For now I mostly see work like this aimed at the construction phase. Little by little we automate the whole thing.
Just me doing a literature note:
The authors try to find faster algorithms for sorting sequences of 3-5 elements (programs call these the most, as building blocks of larger sorts), working in the computer's assembly instead of higher-level C, with a space of possible instruction combinations comparable to the game of Go, around 10^700. After an instruction is selected for adding to the algorithm, if the test output, given the currently selected instructions, is different from the expected output, the model invalidates "the entire algorithm"? This way, they ended up with algorithms for the "LLVM libc++ sorting library that were up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements." Then they translated the assembly to C++ for LLVM libc++.
softwareengineering