• sashinator
    +7

    Programmer here

    it’s not a simple matter of “writing leaner code”

    the complexity of requirements has also changed and grown drastically

    it’s humanly impossible to maintain programs comprising millions of lines of code in high-level programming languages (which “compile” to, i.e. automatically get translated into, billions of lines of assembly code) by hand and without automation

    the software is bloated in part, but to a much greater extent its size reflects the complexity of what it is required to do

    for example, the Linux kernel (just the kernel that gets put into operating systems like Android, Ubuntu, or server flavors like Debian, which contain billions of lines of code outside of the kernel) has 25+ million lines of code: enough that, printed on paper with no margins, it would produce a stack of paper 38 meters tall
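    The 38-meter figure can be sanity-checked with some rough numbers. The lines-per-page and sheet-thickness values below are my own estimates, not from the linked source:

    ```python
    # Rough sanity check of the "38 m stack of paper" claim.
    # Assumptions (mine, not from the source): an A4 page with no
    # margins holds about 66 lines, and one sheet is ~0.1 mm thick.
    lines_of_code = 25_000_000
    lines_per_page = 66
    sheet_thickness_mm = 0.1

    pages = lines_of_code / lines_per_page
    stack_height_m = pages * sheet_thickness_mm / 1000
    print(f"{pages:,.0f} pages, roughly {stack_height_m:.0f} m tall")
    ```

    With those assumptions you get roughly 380,000 pages and a stack in the high 30s of meters, so the claim is at least the right order of magnitude.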

    that’s just the core of the operating system, the part that manages CPU execution

    https://www.visualcapitalist.com/millions-lines-of-code/

    so quite a few things like that are contributing

    historically, the way we managed this problem was abstraction: writing fewer lines of human-readable code that expand into more lines of lower-level code

    we went from physically rewiring CPU circuits, to writing operation codes for general-purpose prewired circuits, to writing assembly code that manages “memory addresses” and “compiles” to operation codes, to writing linguistic instructions that manage “memory” and compile to assembly code, and finally, for the past 30 years, we have been using 4th-generation high-level programming languages to define and model “objects”, which compile to linguistic instructions that manage “memory”
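    To make the “few high-level lines expand into many low-level instructions” point concrete, here is a small sketch (the function is just an illustration): Python’s standard `dis` module shows the lower-level bytecode instructions that three lines of source get compiled into.

    ```python
    import dis

    def average(values):
        """Three lines of high-level code."""
        total = sum(values)
        count = len(values)
        return total / count

    # Count the lower-level bytecode instructions this compiles to;
    # there are noticeably more of them than lines of source.
    instructions = list(dis.Bytecode(average))
    print(f"{len(instructions)} bytecode instructions")
    dis.dis(average)  # full listing, one line per instruction
    ```

    The exact count varies by Python version, but it is always several times the number of source lines, and the same multiplication happens again when bytecode (or assembly, for compiled languages) becomes machine operations.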

    and each time we make one of these abstractions, we do it to give people comprehension of, and audit control over, the underlying system components. Anecdotally (I lost the link), a programmer in the ’70s could not resolve a bug in his assembly program because of its complexity and, so the lore goes, had to take LSD in order to visualize the full program model (and solve the bug). But with the advent of higher-level linguistic programming languages, assembly programs of that complexity simply went away, because humans were no longer required to code them by hand

    so yes, we have a lot of bloated, inefficiently running software, and part of the solution would be to offer better toolsets that manage that complexity with simpler human control systems

    historically, what happens with these technological leaps is that they solve the complexity of legacy problems (or problems already solved with legacy technologies), but because the new solution is more easily modified, elaborated on, and extended by hand, people use it to those ends, and that introduces new, previously undiscovered complexities (which then need an even higher-level approach to manage them and keep them from collapsing under their own weight)

    really hope this makes sense to non-programmers... I did the best I could

    • StarFlower
      +11

      Also, I think it's important to distinguish inefficient code, bloat, and helpful abstractions meant for other humans to read (separate things in themselves) from important features. For example, I feel there shouldn't be so much focus on lean code that important but non-end-user-visible things like security features get omitted or weakened.