Oop

oop vs procedural notes
Procedural and functional should be the only paradigms. A problem is by definition problematic; it cannot be reduced. OOP is a scam foisted on programmers by academia, which gave academics a reason to complexify something understandable so they could produce endless papers. It also allows for an NSA encryption attack vector; I2P is written in Java, for example. Nobody agrees on what is meant by OOP. All programming is procedural; "object" in OOP is just another term for message receptor, and OOP is message programming. This adds a layer of Rube Goldberg contrivance to what remains procedural in nature. "Everything is an object" goes the slogan, but is everything really a message? Data is inert; we don't think of "messages" when mapping data from the domain to the range with a mathematical function. By that logic, int, float and double in C are objects too: their behavior is encapsulated, and an assembler function transforms inert data on reception of a message. Straight C can be used to emulate a higher-level OOP language like Python. Python allows mixing of procedural and message programming, while Java doesn't. Because OOP isn't defined, it allows for the mistake of claiming that int, float etc. aren't message receptors or "objects".
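That mixing in Python is easy to show. A minimal sketch with illustrative names (Rect and area are mine, not from any source): the same computation written once as a plain procedure over inert data, and once as a message sent to an object.

    # Procedural style: a free function transforms inert data.
    def area(shape):
        return shape["w"] * shape["h"]

    # Message style: the same computation reached by sending `area` to an object.
    class Rect:
        def __init__(self, w, h):
            self.w = w
            self.h = h

        def area(self):
            return self.w * self.h

    print(area({"w": 3, "h": 4}))  # procedural call: 12
    print(Rect(3, 4).area())       # message send: 12

Both run in one file; Python doesn't force a choice, which is the point made above.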

An object is a message receptor. The object instance receives a message and calls a function which transforms the data or "fields". The fields are stored in a dictionary; with inheritance they are global in scope, just as a hash table is global in procedural code. Alan Kay on message programming.
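A hedged sketch of that reading in Python (all names illustrative): the "object" is nothing but a field dictionary plus a dispatch table, and sending a message just looks up a function and applies it to the fields.

    # The fields live in a plain dictionary, exactly as described above.
    fields = {"balance": 100}

    # Ordinary functions transform the fields.
    def deposit(f, amount):
        f["balance"] += amount

    def withdraw(f, amount):
        f["balance"] -= amount

    # The dispatch table maps message names to those functions.
    handlers = {"deposit": deposit, "withdraw": withdraw}

    def send(obj, message, *args):
        # Deliver a message: look up the handler, let it transform the fields.
        handlers[message](obj, *args)

    send(fields, "deposit", 50)
    print(fields["balance"])  # 150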

Oop libraries are only collections of functions. Inside classes those functions are called "methods" to obfuscate this; a semantic brew is stewed to make it seem that OOP isn't just procedural programming in a Calvin-and-Hobbes costume. You can spend two hours rewriting code procedurally, or spend two hours looking for a function buried in some class. Having to rewrite code is inevitable, and preferable to endless searching.
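Python itself concedes the point: a "method" is retrievable from the class as an ordinary function and callable with the instance as its first argument. A quick check (Account is an illustrative name):

    class Account:
        def __init__(self, balance):
            self.balance = balance

        def deposit(self, amount):   # a "method"...
            self.balance += amount

    a = Account(100)
    a.deposit(25)               # the message-passing spelling
    Account.deposit(a, 25)      # ...is just a function taking the instance
    print(a.balance)            # 150
    print(Account.deposit)      # <function Account.deposit at 0x...>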

A python class is a complex wrapper around a dictionary - so, if anything, you are adding more overhead than by using a simple dictionary. A Python dictionary is internally implemented with a hash table. A hash table is a data structure that maps keys to values by taking the hash value of the key (by applying some hash function to it) and mapping that to a bucket where one or more values are stored. If you are working on performance critical applications, use C or something. https://stackoverflow.com/questions/35988/c-like-structures-in-python.
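The dictionary claim can be checked directly: a CPython instance keeps its fields in an attribute dictionary. A minimal sketch (Point is an illustrative name):

    class Point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(1, 2)
    print(p.__dict__)              # {'x': 1, 'y': 2} -- the fields really are a dict
    print(vars(p) is p.__dict__)   # True

    # The plain-dictionary version of the same data, minus the class machinery:
    q = {"x": 1, "y": 2}
    print(q["x"] + p.y)            # 3

(Classes that declare `__slots__` drop the per-instance dict; the default case is as shown.)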

stop writing classes
https://www.youtube.com/watch?v=o9pEzgHorH0  Classes are great but they are also overused. This talk will describe examples of class overuse taken from real world code and refactor the unnecessary classes, exceptions, and modules out of them.

Steve Yegge
http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html Object Oriented Programming puts the Nouns first and foremost. Why would you go to such lengths to put one part of speech on a pedestal? Why should one kind of concept take precedence over another? It's not as if OOP has suddenly made verbs less important in the way we actually think. It's a strangely skewed perspective. As my friend Jacob Gabrielson once put it, advocating Object-Oriented Programming is like advocating Pants-Oriented Clothing.
 * https://plus.google.com/u/0/110981030061712822816/posts/KaSKeg4vQtz

It's easier for me to understand a system written in a procedural language like C even without its documentation. But the same system written in Java is very hard for me to grasp without its docs and UML. When going to an unfamiliar place, for example, it's easier to follow directions (e.g., go straight; at the 3rd street, turn left; cross the first pedestrian crossing and you will see the train station, blah blah...) than to navigate by objects whose state and behavior are unfamiliar to you (e.g., when you see a big blue rectangular building, it's the most famous shop; if it's not blue but is big, it's probably the hospital; ignore the shop and the hospital because you must pay attention to the yellow building, which is the restaurant; continue roaming around the city until you see the shop, the hospital, and the restaurant beside each other; when you find the right restaurant and it is open, enter and eat some food there; otherwise, go home and cook your own food!).

oop failure
from https://www.quora.com/Was-object-oriented-programming-a-failure. The Catch-22 of the whole equation - and the part which corporate execs seem to understand the least - is that Object-Oriented is strictly an organizational paradigm. ALL actual software is strictly procedural. No matter how the source code is laid out (wholly for the convenience of the programmer(s)), data is data, and computers don't care whether you're using OOP or procedural.

Alexander Stepanov's complaint is blistering and accurate. If you read Types and Programming Languages, you get a sense for just how much complexity objects add to your world. OOP, as commonly envisioned, doesn't play well with static or dynamic typing.

Is OOP a failure? Well, what is it? I've heard OOP given about 12 definitions, all credible in some core way, but many conflicting. Like "Scrum", it's too all over the place to justify a closed-form, final opinion. It's either highly beneficial or loathsome depending on which interpretation one uses. There's good OOP and bad OOP. This should be no surprise: in the anti-intellectual world of mainstream business software, it's mostly bad OOP. (For "Scrum", there's the same sad story.)

Separation of implementation and interface is a clear win. That's not limited to OO languages, of course. Haskell has type classes, Clojure has protocols, and OCaml has (if you're brave) functors. Nonetheless, I'm going to score that as a clear Good Idea that OOP championed early on.
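In Python the same separation can be sketched with an abstract base class: callers are written against the interface and never see the implementation. A minimal, hedged sketch (Store, MemoryStore and cache_warm are illustrative names, not from the quoted answer):

    from abc import ABC, abstractmethod

    class Store(ABC):
        """The interface: callers depend only on this."""
        @abstractmethod
        def get(self, key): ...

        @abstractmethod
        def put(self, key, value): ...

    class MemoryStore(Store):
        """One interchangeable implementation behind the interface."""
        def __init__(self):
            self._data = {}

        def get(self, key):
            return self._data[key]

        def put(self, key, value):
            self._data[key] = value

    def cache_warm(store: Store):
        store.put("ready", True)   # written against the interface only

    cache_warm(MemoryStore())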

For that, Alan Kay's inspiration was the biological cell. Alan Kay is one of the best software designers alive, and has been extremely critical of modern OOP. Now, the cell: it's an intricate, convoluted machine, almost on the verge of collapsing under the weight of its own complexity. In a larger organism, cells communicate through a simpler interface: chemical signals (hormones) and electrical activations. If they coupled more tightly, the organism wouldn't be viable. Kay was not saying "you should go out and create enormously complex systems"; OOP, to him, was about how to manage complexity when it emerged. In this way, OOP and functional programming (FP) were actually orthogonal (and could support one another) rather than in conflict. It was still desirable that objects do one thing and do it well; interfaces were intended to underscore that "one thing" when the demands on the implementation made it hard to tell what it was.

OOP and FP (and, in reality, all higher-level languages) both exist to answer the question: how do we prevent software entropy? See, Alan Turing's result on the Halting Problem isn't about termination or about machines and tapes. It's the first of many theorems establishing the same thing: we can't reason, in any way whatsoever, about arbitrary code. It's mathematically impossible. The obvious solution: "don't write arbitrary code." (Most code that a person would write to solve a problem lives in a low-entropy region where reasoning about code is possible.) Equally obviously, no one sets out to write "arbitrary code". Generally, we don't go very far into that chaotic space of "all code", and that's good. However, as the number of hands that have passed over code increases, it drifts further into that high-entropy, "arbitrary code" space. FP and OOP are two toolsets designed to prevent it from getting there too fast.

FP enforces simplicity by forcing people to think about state and mutability, encouraging code that can be decomposed into "do one thing" components, mostly mathematical functions. OOP tries to make software look like "the real world" as understood by an average person (CheckingAccount extends Account extends HasBalance extends Object). The problem is that it encourages people to program before they think, and it allows software to be created that mostly works although no one knows why it does. OOP places high demands on the creators of its machinery (in effect, a new DSL built to solve a problem). Because of those demands on human care of the software, the historical solution has been to have elite programmers (architects!) design and peons implement. That never worked out, for a number of reasons: it's hard to separate capability from political success, the best programmers don't want to spend their days on lines and boxes and DDL, and business requirements remain a constant source of increasing complexity (with outdated or unwanted requirements never retracted).
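Spelled out as code, the parenthetical inheritance chain shows how fast the "real world" modelling piles up layers. A sketch of exactly that chain (only the class names come from the text above; the bodies are my guesses):

    class HasBalance:
        def __init__(self):
            self.balance = 0

    class Account(HasBalance):
        def deposit(self, amount):
            self.balance += amount

    class CheckingAccount(Account):
        def write_check(self, amount):
            self.balance -= amount

    c = CheckingAccount()      # CheckingAccount -> Account -> HasBalance -> object
    c.deposit(100)
    c.write_check(30)
    print(c.balance)           # 70: four layers of hierarchy around one integer field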

What went wrong? People rushed to use the complex stuff (see: inheritance, especially multiple) when it wasn't necessary, and often with a poor understanding of the fundamentals. Bureaucratic entropy and requirement creep (it is rare that requirements are subtracted, even if the original stakeholders lose interest) became codified in ill-conceived software systems. Worst of all, over-complex systems became a great way for careerist engineers (and architects!) to gain "production experience" with the latest buzzwords and "design patterns". With all the C++/Java corner-cases and OO nightmares that come up in interview questions, it's actually quite reasonable that a number of less-skilled developers would get the idea that they need to start doing some of that stuff (so they can answer those questions!) to advance into the big leagues.

procedural is oop
https://www.quora.com/Was-object-oriented-programming-a-failure Okay, ready to drop the bomb? Everything is object-like in your favourite non-OOP language.

C ints are objects, and I do not mean in the Java sense. They encapsulate the underlying binary representation. They have well-defined behaviour, namely arithmetic. You have that polymorphism stack of short, long, long long, leading to that devilish char.
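The claim is about C, but Python makes the same point literal: integer arithmetic really is message dispatch. A quick check (mine, not from the quoted answer):

    print((3).__add__(4))         # 7: the + operator is a message send underneath
    print(isinstance(3, object))  # True, even for a "primitive" int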

Haskell functions are objects, and I do not mean in the Java sense. They encapsulate the underlying algorithm. They have well-defined behaviour, namely being callable.

OOP and FP are just two representations of data and operations. It is like the two sides of a Fourier transform. One is ugly, the other is sleek. Which is which depends entirely on the issue at hand.

So let me answer the actual question: OOP is not a failure. FP may be more attractive to an algorithm designer, but I have a truckload of tasks that are stupidly difficult to express functionally.

What is a failure is thinking strictly in one category or the other. Some tasks require OOP, some require FP, but most can be expressed either way. Whether one or the other is a good idea depends on the use case…

 * Without loss of generality, FP just makes the analogy nicer. The argument would work for composition models and other stuff. It is all binary anyways.

cat

 * http://nuthole.com/blog/2004/02/05/musings-on-an-interview-with-alex-stepanov/ STL is not OOP.
 * http://harmful.cat-v.org/software/OO_programming/why_oo_sucks "Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds. If a language technology is so bad that it creates a new industry to solve problems of its own making then it must be a good idea for the guys who want to make money. This is the real driving force behind OOPs."
 * http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end
 * https://whydoesitsuck.com/cpp-sucks-for-a-reason/ In my opinion, C++ is this weird Frankenstein velociraptor that somehow survived the dark ages of programming and is now constantly being revived and patched up. The members of the standardization committee are trying hard to make things a little better by applying tons of makeup over its wrinkles. Obviously, this doesn’t work out too well since it’s still the same ugly beast under the mask. The problem here is that C++ is old – and I mean the antique kind of old that deserves to be put into retirement.

Torvalds
> As it is right now, it's too hard to see the high-level logic through this endless busy-work of micro-managing strings and memory.
 * https://lwn.net/Articles/249460/ Torvalds on the C++ mess. So I'm sorry, but for something like git, where efficiency was a primary objective, the "advantages" of C++ is just a huge mistake. The fact that we also piss off people who cannot see that is just a big additional advantage. If you want a VCS that is written in C++, go play with Monotone. Really. They use a "real database". They use "nice object-oriented libraries". They use "nice C++ abstractions". And quite frankly, as a result of all these design decisions that sound so appealing to some CS people, the end result is a horrible and unmaintainable mess. But I'm sure you'd like it more than git.
 * https://web.archive.org/web/20080304231021/http://article.gmane.org/gmane.comp.version-control.git/57961 On Thu, 6 Sep 2007, Dmitry Kakurin wrote:

The string/memory management is not at all relevant. Look at the code (I bet you didn't). This isn't the important, or complex part.


> IMHO Git has a brilliant high-level design (object database, using hashes, simple and accessible storage for data and metadata). Kudos to you!
> The implementation: a mixture of C and shell scripts, command line interface that has evolved bottom-up, is so-so.

The only really important part is the *design*. The fact that some of it is in a "prototyping language" is exactly because it wasn't the core parts, and it's slowly getting replaced. C++ would in *no* way have been able to replace the shell scripts or perl parts.

And C++ would in no way have made the truly core parts better.

> > and comparing C to assembler just shows that you don't have a friggin idea about what you're talking about.
>
> I don't see myself comparing assembler to C anywhere.

You made a very clear "assembler -> C -> C++/C#" progression in your life, comparing my staying with C to being a "dinosaur", as if it were some inescapable evolution towards a better/more modern language.

With zero basis for it, since in many ways C is much superior to C++ (and even more so C#) in both its portability and in its availability of interfaces and low-level support.

> I was pointing out that I've been programming in different languages (many more actually) and observed bad developers writing bad code in all of them. So this quality "bad developer" is actually language-agnostic :-).

You can write bad code in any language. However, some languages, and especially some of the *mental* baggage that goes with them, are bad.

The very fact that you come in as a newbie, point to some absolutely *trivial* patches, and use that as an argument for a language that the original author doesn't like, is a sign of you being a person who should be disabused of any idiotic notions as soon as possible.

The things that actually *matter* for core git code is things like writing your own object allocator to make the footprint be as small as possible in order to be able to keep track of object flags for a million objects efficiently. It's writing a parser for the tree objects that is basically fairly optimal, because there *is* no abstraction. Absolutely all of it is at the raw memory byte level.

Can those kinds of things be written in other languages than C? Sure. But they can *not* be written by people who think the "high-level" capabilities of C++ string handling somehow matter.

The fact is, that is *exactly* the kind of thing that C excels at. Not just as a language, but as a required *mentality*. One of the great strengths of C is that it doesn't make you think of your program as anything high-level. It's what makes you apparently prefer other languages, but the thing is, from a git standpoint, "high level" is exactly the wrong thing. (Linus)

rebol

 * http://www.rebol.com/article/0425.html In its purest form, OO is a model of associating behavior with state (function with data). Originally, back in 1982, it seemed like a good idea because real world objects had specific actions related to them. A pen was used to write and draw. A pencil was used to write and draw. We thought, "Wow, there's a pattern, and it seems to be quite natural." However, it was a false model. A pen does not write and draw, it takes a human to make a pen write and draw. The actions of write and draw do not belong to the pen. OOL is not a complete solution. Too many of the behaviors of objects come from (or are influenced by) sources that are external to their encapsulated definitions.
 * http://www.yegor256.com/2016/08/15/what-is-wrong-object-oriented-programming.html
 * http://www.separatinghyperplanes.com/2014/10/on-object-oriented-programming.html

 * http://blog.berniesumption.com/software/inheritance-is-evil-and-must-be-destroyed/
 * http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en
 * http://axilmar.blogspot.com/2014/10/object-oriented-programming-is-disaster.html (rebuttal: https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch)
 * https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab
 * "all evidence points to OOP being a disaster": http://wiki.c2.com/?ArgumentsAgainstOop
 * https://blog.codinghorror.com/rethinking-design-patterns/
 * https://www.youtube.com/watch?v=RdE-d_EhzmA David West
 * https://news.ycombinator.com/item?id=3641212
 * http://lucacardelli.name/Papers/BadPropertiesOfOO.html
 * http://web.archive.org/web/20080710144930/http://gagne.homedns.org:80/~tgagne/contrib/EarlyHistoryST.html

http://www.shenlanguage.org/

http://whiley.org/2010/06/23/rich-hickey-on-clojure-se-radio/

https://8thlight.com/blog/colin-jones/2012/06/05/on-obsessions-primitive-and-otherwise.html

Paul Graham
http://www.paulgraham.com/avg.html

http://www.paulgraham.com/hundred.html Somehow the idea of reusability got attached to object-oriented programming in the 1980s, and no amount of evidence to the contrary seems to be able to shake it free. But although some object-oriented software is reusable, what makes it reusable is its bottom-upness, not its object-orientedness. Consider libraries: they're reusable because they're language, whether they're written in an object-oriented style or not. http://queue.acm.org/blogposting.cfm?id=34658

http://www.paulgraham.com/noop.html, http://www.paulgraham.com/reesoo.html

http://harmful.cat-v.org/software/OO_programming/why_oo_sucks

http://www.artima.com/weblogs/viewpost.jsp?thread=141312

http://batsov.com/articles/2011/05/12/jvm-langs-clojure/

blot

 * OOP is about taming complexity through modeling, but we have not mastered this yet, possibly because we have difficulty distinguishing real and accidental complexity.

http://blog.jot.fm/2010/08/26/ten-things-i-hate-about-object-oriented-programming/ Clearly classes should be great. Our brain excels at classifying everything around us. So it seems natural to classify everything in OO programs too. However, in the real world, there are only objects. Classes exist only in our minds. Can you give me a single real-world example of class that is a true, physical entity? No, I didn’t think so. Now, here’s the problem. Have you ever considered why it is so much harder to understand OO programs than procedural ones? Well, in procedural programs procedures call other procedures. Procedural source code shows us … procedures calling other procedures. That’s nice and easy, isn’t it? In OO programs, objects send messages to other objects. OO source code shows us … classes inheriting from classes. Oops. There is a complete disconnect in OOP between the source code and the runtime entities. Our tools don’t help us because our IDEs show us classes, not objects. I think that’s probably why Smalltalkers like to program in the debugger. The debugger lets us get our hands on the running objects and program them directly. Here is my message for tool designers: please give us an IDE that shows us objects instead of classes!
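The disconnect is easy to demonstrate in a few lines: the source below shows one inheritance relation, but which object actually answers a message is only decided by the runtime object graph, which appears nowhere in the class declarations. (Illustrative names; a sketch of the complaint, not a quotation from the post.)

    class Engine:
        def start(self):
            print("vroom")

    class SportsEngine(Engine):      # what the source shows: inheritance
        def start(self):
            print("VROOM")

    class Car:
        def __init__(self, engine):
            self.engine = engine     # the runtime object graph...

        def drive(self):
            self.engine.start()      # ...and the actual message send

    Car(SportsEngine()).drive()      # prints "VROOM"; decided at runtime, not in the source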

As we have all learned, methods in good OO programs should be short and sweet. Lots of little methods are good for development, understanding, reuse, and so on. Well, what’s the problem with that? Well, consider that we actually spend more time reading OO code than writing it. This is what is known as productivity. Instead of spending many hours writing a lot of code to add some new functionality, we only have to write a few lines of code to get the new functionality in there, but we spend many hours trying to figure out which few lines of code to write! One of the reasons it takes us so long is that we spend much of our time bouncing back and forth between … lots of little methods. This is sometimes known as the Lost in Space syndrome. It has been reported since the early days of OOP. To quote Adele Goldberg, “In Smalltalk, everything happens somewhere else.”

Yolo
https://github.com/OznOg/yolo-reloaded A C++ rewrite of Yolo that turns the readable C original into something unreadable.
