Oop

Nouns as dissimilar term for verbs
https://en.wikipedia.org/wiki/Design_Patterns uses "pattern" (a noun) as a dissimilar term for some design or action (a verb). This semantic fraud is mirrored by "open source" as a dissimilar term for Stallman's "free". But "free" is best clarified as public domain: GPL and BSD code remains proprietary under copyright law.

“Gothic architecture”, “Georgian coat” and “Victorian corset” are nouns, not verbs. Software design "pattern" lingo would have us speak of an "opening the door pattern" or a "pick up the fork pattern". These everyday activities aren't noun-like patterns but verb-like actions. The Gang of Four, like the inventor of UML, used pretentious, Orwellian language to extract millions of dollars from the software industry through conferences and consulting gigs. "Understanding" pattern-based design is groupthink, born of the desire not to be the stupid one left out. University-professor conmen capitalize on this, making monkeys of their students.

Academic fraud isn't limited to social studies and biology, where nearly 80% of experiments reported in journal papers can't be replicated. It seemingly occurs to no one that computer science is just as prone to fraud, for the same reason: the pressure to produce papers. The Equifax breach came through Apache Struts, an OOP Java framework that provided the attack vector used to steal the company's database; firing people at Equifax won't solve that problem. As with the NSA's Systemd subversion of Linux, we can presume they also have a hand in forcing OOP on the industry.

Procedural and functional should be the only paradigms. A problem is by definition problematic; it cannot be reduced away. OOP is a scam foisted on programmers by academia, as it gave academics a reason to complexify something understandable so they could produce endless papers. It also opens an attack vector against encryption for the NSA; I2P, for example, is written in Java. Nobody agrees on what is meant by OOP. All programming is procedural; "object" in OOP is a dissimilar term for "message", i.e. message programming. This adds a layer of Rube Goldberg contrivance to what remains procedural in nature. "Everything is an object," goes the slogan, but is everything really a message? Data is inert; we don't think of 'messages' when mapping data from the domain to the range with a mathematical function. By the same token, int, float and double in C are objects: their behavior is encapsulated. An assembler function transforms inert data on reception of a message. Straight C can be used to emulate a higher-level OOP language like Python. Python allows mixing procedural and message programming, while Java doesn't. Because OOP isn't defined, it allows the mistake of claiming that int, float etc. aren't message receptors, and thus not "objects".

An object is a message receptor. The object instance receives a message and calls a function which transforms the data ("fields"). See Alan Kay on message programming.

OOP libraries are only collections of functions. Inside classes they are called "methods" to obfuscate this; a semantic brew is stewed to make it seem that OOP isn't a Calvin-and-Hobbes-style relabeling of procedural programming. You can spend two hours rewriting code procedurally, or spend two hours hunting for a function buried in some class. Having to rewrite code is inevitable, and preferable to endless searching.

A python class is a complex wrapper around a dictionary - so, if anything, you are adding more overhead than by using a simple dictionary. A Python dictionary is internally implemented with a hash table. A hash table is a data structure that maps keys to values by taking the hash value of the key (by applying some hash function to it) and mapping that to a bucket where one or more values are stored. If you are working on performance critical applications, use C or something. https://stackoverflow.com/questions/35988/c-like-structures-in-python.

stop writing classes
https://www.youtube.com/watch?v=o9pEzgHorH0  Classes are great but they are also overused. This talk will describe examples of class overuse taken from real world code and refactor the unnecessary classes, exceptions, and modules out of them.

Steve Yegge
http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html Object Oriented Programming puts the Nouns first and foremost. Why would you go to such lengths to put one part of speech on a pedestal? Why should one kind of concept take precedence over another? It's not as if OOP has suddenly made verbs less important in the way we actually think. It's a strangely skewed perspective. As my friend Jacob Gabrielson once put it, advocating Object-Oriented Programming is like advocating Pants-Oriented Clothing.
 * https://plus.google.com/u/0/110981030061712822816/posts/KaSKeg4vQtz

It's easier for me to understand a system written in a procedural language like C, even without its documentation. But the same system written in Java is very hard for me to grasp without its docs and UML. When going to an unfamiliar place, for example, it's easier to follow directions that tell you how to get there (e.g., go straight; at the 3rd street, turn left; cross at the first pedestrian crossing and you will see the train station, blah blah...) than to find objects whose state and behavior are unfamiliar to you (e.g., when you see a big blue rectangular building, it's the most famous shop; if it's big but not blue, it's probably the hospital; ignore the shop and the hospital because you must pay attention to the yellow building, which is the restaurant. Continue roaming around the city until you see the shop, the hospital, and the restaurant beside each other; when you find the right restaurant and it is open, enter and eat some food there; otherwise, go home and cook your own food!).

oop failure
from https://www.quora.com/Was-object-oriented-programming-a-failure. The Catch-22 in the whole equation, and the part which corporate execs seem to understand the least, is that Object-Oriented is strictly an organizational paradigm. ALL actual software is strictly procedural. No matter how the source code is laid out (wholly for the convenience of the programmer(s)), data is data, and computers don't care whether you're using OOP or procedural style.

Alexander Stepanov's complaint is blistering and accurate. If you read Types and Programming Languages, you get a sense for just how much complexity objects add to your world. OOP, as commonly envisioned, doesn't play well with static or dynamic typing.

Is OOP a failure? Well, what is it? I've heard OOP given about 12 definitions, all credible in some core way, but many conflicting. Like "Scrum", it's too all over the place to justify a closed-form, final opinion. It's either highly beneficial or loathsome depending on which interpretation one uses. There's good OOP and bad OOP. This should be no surprise: in the anti-intellectual world of mainstream business software, it's mostly bad OOP. (For "Scrum", there's the same sad story.)

Separation of implementation and interface is a clear win. That's not limited to OO languages, of course. Haskell has type classes, Clojure has protocols, and Ocaml has (if you're brave) functors. Nonetheless, I'm going to score that as a clear Good Idea that OOP championed early on.

Alan Kay's inspiration here was the biological cell. Alan Kay is one of the best software designers alive, and has been extremely critical of modern OOP. Now, the cell: it's an intricate, convoluted machine, almost on the verge of collapsing under the weight of its own complexity. In a larger organism, cells communicate through a simpler interface: chemical signals (hormones) and electric activations. If they coupled more tightly, the organism wouldn't be viable. Kay was not saying, "you should go out and create enormously complex systems". OOP, to him, was about how to manage it when complexity emerged. In this way, OOP and functional programming (FP) were actually orthogonal (and could support one another) rather than in conflict. It was still desirable that objects do one thing and do it well; but interfaces were intended to underscore that "one thing" when the demands on the implementation made it hard to tell what that was.

OOP and FP (and, in reality, all higher-level languages) both exist to answer the question, "How do we prevent software entropy?" See, Alan Turing's result on the Halting Problem isn't about termination or about machines and tapes. It's the first of many theorems establishing the same thing: we can't reason, in any way whatsoever, about arbitrary code. It's mathematically impossible. Obvious solution: "don't write arbitrary code." (Most code that a person would write to solve a problem is in a low-entropy region where reasoning about code is possible.) Equally obviously, no one does write "arbitrary code". Generally, we don't go very far at all into that chaotic space of "all code", and that's good. However, as the number of hands that have passed over code increases, it gets further into that high-entropy/"arbitrary code" space. FP and OOP are two toolsets designed to prevent it from getting there too fast. FP enforces simplicity by forcing people to think about state and mutability, encouraging code that can be decomposed into "do one thing" components-- mostly mathematical functions. OOP tries to make software look like "the real world" as can be understood by an average person. (CheckingAccount extends Account extends HasBalance extends Object). The problem is that it encourages people to program before they think, and it allows software to be created that mostly works but no one knows why it does. OOP places high demands on the creators of the machinery (in effect, a new DSL) that will be built to solve a problem. Because of the high demands OOP places on human care of the software, the historical solution has been to have elite programmers (architects!) 
design and peons implement; that never worked out, for a number of reasons: it's hard to separate capability from political success, the best programmers don't want to spend their days with lines and boxes and DDL, and business requirements are still a constant source of increasing complexity (with outdated or unwanted requirements never retracted).

What went wrong? People rushed to use the complex stuff (see: inheritance, especially multiple) when it wasn't necessary, and often with a poor understanding of the fundamentals. Bureaucratic entropy and requirement creep (it is rare that requirements are subtracted, even if the original stakeholders lose interest) became codified in ill-conceived software systems. Worst of all, over-complex systems became a great way for careerist engineers (and architects!) to gain "production experience" with the latest buzzwords and "design patterns". With all the C++/Java corner-cases and OO nightmares that come up in interview questions, it's actually quite reasonable that a number of less-skilled developers would get the idea that they need to start doing some of that stuff (so they can answer those questions!) to advance into the big leagues.

procedural is oop
https://www.quora.com/Was-object-oriented-programming-a-failure Okay, ready to drop the bomb? Everything is object-like in your favourite non-OOP language. C ints are objects, and I do not mean in the Java sense. They encapsulate the underlying binary. They have well-defined behaviour, namely arithmetic. You have that polymorphism stack of short, long, long long leading to that devilish char.

Haskell functions are objects, and I do not mean in the Java sense. They encapsulate the underlying algorithm. They have well-defined behaviour, namely being callable. OOP and FP are just two representations of data and operations. It is like the two sides of a Fourier transformation: one is ugly, the other is sleek, and which is which depends entirely on the issue at hand. So let me answer the actual question: OOP is not a failure. FP may be more attractive to an algorithm designer, but I have a truckload of tasks that are stupidly difficult to express functionally. What is a failure is thinking strictly in one category or the other. Some tasks require OOP, some require FP, but most can be expressed either way. Whether one or the other is a good idea depends on the use case…

 * Without loss of generality, FP just makes the analogy nicer. The argument would work for composition models and other stuff. It is all binary anyways.

cat-v

 * http://nuthole.com/blog/2004/02/05/musings-on-an-interview-with-alex-stepanov/ STL is not OOP.
 * http://harmful.cat-v.org/software/OO_programming/why_oo_sucks "Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds. If a language technology is so bad that it creates a new industry to solve problems of its own making then it must be a good idea for the guys who want to make money. This is the real driving force behind OOPs."
 * http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end
 * https://whydoesitsuck.com/cpp-sucks-for-a-reason/ In my opinion, C++ is this weird Frankenstein velociraptor that somehow survived the dark ages of programming and is now constantly being revived and patched up. The members of the standardization committee are trying hard to make things a little better by applying tons of makeup over its wrinkles. Obviously, this doesn’t work out too well since it’s still the same ugly beast under the mask. The problem here is that C++ is old – and I mean the antique kind of old that deserves to be put into retirement.

Torvalds
> As it is right now, it's too hard to see the high-level logic thru this endless busy-work of micro-managing strings and memory.
 * https://lwn.net/Articles/249460/ Torvalds on the C++ mess. So I'm sorry, but for something like git, where efficiency was a primary objective, the "advantages" of C++ is just a huge mistake. The fact that we also piss off people who cannot see that is just a big additional advantage. If you want a VCS that is written in C++, go play with Monotone. Really. They use a "real database". They use "nice object-oriented libraries". They use "nice C++ abstractions". And quite frankly, as a result of all these design decisions that sound so appealing to some CS people, the end result is a horrible and unmaintainable mess. But I'm sure you'd like it more than git.
 * https://web.archive.org/web/20080304231021/http://article.gmane.org/gmane.comp.version-control.git/57961 On Thu, 6 Sep 2007, Dmitry Kakurin wrote:

The string/memory management is not at all relevant. Look at the code (I bet you didn't). This isn't the important, or complex part.


 * > IMHO Git has a brilliant high-level design (object database, using hashes, simple and accessible storage for data and metadata). Kudos to you! The implementation: a mixture of C and shell scripts, command line interface that has evolved bottom-up is so-so.

The only really important part is the *design*. The fact that some of it is in a "prototyping language" is exactly because it wasn't the core parts, and it's slowly getting replaced. C++ would in *no* way have been able to replace the shell scripts or perl parts.

And C++ would in no way have made the truly core parts better.

> > and comparing C to assembler just shows that you don't have a friggin idea about what you're talking about.
>
> I don't see myself comparing assembler to C anywhere.

You made a very clear "assembler -> C -> C++/C#" progression in your life, comparing my staying with C to being a "dinosaur", as if it were some inescapable evolution towards a better/more modern language.

With zero basis for it, since in many ways C is much superior to C++ (and even more so C#) in both its portability and in its availability of interfaces and low-level support.

> I was pointing out that I've been programming in different languages (many more actually) and observed bad developers writing bad code in all of them. So this quality "bad developer" is actually language-agnostic :-).

You can write bad code in any language. However, some languages, and especially some *mental* baggages that go with them are bad.

The very fact that you come in as a newbie, point to some absolutely *trivial* patches that the original author doesn't like, and use that as an argument for a language, is a sign of you being a person who should be disabused of any idiotic notions as soon as possible.

The things that actually *matter* for core git code is things like writing your own object allocator to make the footprint be as small as possible in order to be able to keep track of object flags for a million objects efficiently. It's writing a parser for the tree objects that is basically fairly optimal, because there *is* no abstraction. Absolutely all of it is at the raw memory byte level.

Can those kinds of things be written in other languages than C? Sure. But they can *not* be written by people who think the "high-level" capabilities of C++ string handling somehow matter.

The fact is, that is *exactly* the kinds of things that C excels at. Not just as a language, but as a required *mentality*. One of the great strengths of C is that it doesn't make you think of your program as anything high-level. It's what makes you apparently prefer other languages, but the thing is, '''from a git standpoint, "high level" is exactly the wrong thing. (Linus)'''

rebol
http://www.rebol.com/article/0425.html In its purest form, OO is a model of associating behavior with state (function with data). Originally, back in 1982, it seemed like a good idea because real world objects had specific actions related to them. A pen was used to write and draw. A pencil was used to write and draw. We thought, "Wow, there's a pattern, and it seems to be quite natural." However, it was a false model. A pen does not write and draw, it takes a human to make a pen write and draw. The actions of write and draw do not belong to the pen. OOL is not a complete solution. Too many of the behaviors of objects come from (or are influenced by) sources that are external to their encapsulated definitions.

Yegor256
http://www.yegor256.com/2016/08/15/what-is-wrong-object-oriented-programming.html Edsger W. Dijkstra (1989) "TUG LINES," Issue 32, August 1989 "Object oriented programs are offered as alternatives to correct ones" and "Object-oriented programming is an exceptionally bad idea which could only have originated in California."


 * Paul Graham (2003) http://www.paulgraham.com/hundred.html The Hundred-Year Language "Object-oriented programming offers a sustainable way to write spaghetti code."


 * http://www.yegor256.com/2016/07/14/who-is-object.html Who is an object? What is common throughout all these definitions is the word "contains" (or "holds," "consists," "has," etc.). They all think that an object is a box with data. And this perspective is exactly what I'm strongly against.

Hyperplanes
 * http://blog.berniesumption.com/software/inheritance-is-evil-and-must-be-destroyed/
 * http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en
 * http://axilmar.blogspot.com/2014/10/object-oriented-programming-is-disaster.html (rebuttal: https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch)
 * http://www.separatinghyperplanes.com/2014/10/on-object-oriented-programming.html
 * "all evidence points to OOP being a disaster" links to http://wiki.c2.com/?ArgumentsAgainstOop
 * https://blog.codinghorror.com/rethinking-design-patterns/
 * https://www.youtube.com/watch?v=RdE-d_EhzmA David West
 * https://news.ycombinator.com/item?id=3641212
 * http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end
 * http://lucacardelli.name/Papers/BadPropertiesOfOO.html
 * http://web.archive.org/web/20080710144930/http://gagne.homedns.org:80/~tgagne/contrib/EarlyHistoryST.html

http://www.shenlanguage.org/

http://whiley.org/2010/06/23/rich-hickey-on-clojure-se-radio/

https://8thlight.com/blog/colin-jones/2012/06/05/on-obsessions-primitive-and-otherwise.html

Paul Graham
http://www.paulgraham.com/avg.html

http://www.paulgraham.com/hundred.html Somehow the idea of reusability got attached to object-oriented programming in the 1980s, and no amount of evidence to the contrary seems to be able to shake it free. But although some object-oriented software is reusable, what makes it reusable is its bottom-upness, not its object-orientedness. Consider libraries: they're reusable because they're language, whether they're written in an object-oriented style or not. http://queue.acm.org/blogposting.cfm?id=34658

http://www.paulgraham.com/noop.html, http://www.paulgraham.com/reesoo.html

http://harmful.cat-v.org/software/OO_programming/why_oo_sucks

http://www.artima.com/weblogs/viewpost.jsp?thread=141312

http://batsov.com/articles/2011/05/12/jvm-langs-clojure/

Lost in Space syndrome

 * OOP is about taming complexity through modeling, but we have not mastered this yet, possibly because we have difficulty distinguishing real and accidental complexity.

http://blog.jot.fm/2010/08/26/ten-things-i-hate-about-object-oriented-programming/ Clearly classes should be great. Our brain excels at classifying everything around us. So it seems natural to classify everything in OO programs too. However, in the real world, there are only objects. Classes exist only in our minds. Can you give me a single real-world example of class that is a true, physical entity? No, I didn’t think so. Now, here’s the problem. Have you ever considered why it is so much harder to understand OO programs than procedural ones? Well, in procedural programs procedures call other procedures. Procedural source code shows us … procedures calling other procedures. That’s nice and easy, isn’t it? In OO programs, objects send messages to other objects. OO source code shows us … classes inheriting from classes. Oops. There is a complete disconnect in OOP between the source code and the runtime entities. Our tools don’t help us because our IDEs show us classes, not objects. I think that’s probably why Smalltalkers like to program in the debugger. The debugger lets us get our hands on the running objects and program them directly. Here is my message for tool designers: please give us an IDE that shows us objects instead of classes!

As we have all learned, methods in good OO programs should be short and sweet. Lots of little methods are good for development, understanding, reuse, and so on. Well, what’s the problem with that? Well, consider that we actually spend more time reading OO code than writing it. This is what is known as productivity. Instead of spending many hours writing a lot of code to add some new functionality, we only have to write a few lines of code to get the new functionality in there, but we spend many hours trying to figure out which few lines of code to write! One of the reasons it takes us so long is that we spend much of our time bouncing back and forth between … lots of little methods. This is sometimes known as the Lost in Space syndrome. It has been reported since the early days of OOP. To quote Adele Goldberg, “In Smalltalk, everything happens somewhere else.”

Yolo
https://github.com/OznOg/yolo-reloaded A C++ rewrite that turns Yolo's readable C into something unreadable.

CppCon (Mike Acton)
https://www.youtube.com/watch?v=rX0ItVEVjHc The transformation of data is the only purpose of any program. Common approaches in C++ which are antithetical to this goal will be presented in the context of a performance-critical domain (console game development). Additionally, limitations inherent in any C++ compiler and how that affects the practical use of the language when transforming that data will be demonstrated. linked from http://altdevblog.com/2012/09/16/q-003-what-is-one-mistake-you-made-recently/
 * There is no ideal abstract solution to the problem. (25min)
 * You cannot future proof code.
 * Code doesn't model the real world.
 * Software isn't a platform.
 * Code cannot be designed around a model of the world.
 * Code isn't more important than data.
 * We must solve the 90% of the problem space (L1/L2/L3 memory caches) that the compiler can't; the point is not to miss in the L2 cache. The OOP example consumed about 90% of the L2 cache, where the straight-C version used much less. The OOP C++ example rewritten in C is debuggable and maintainable, and we can reason about the cost of change. OOP ignores the finite limit of the cache, which is irrational.

http://realtimecollisiondetection.net/blog/?p=81 Some anonymous soul emailed me regarding my “Design patterns are from hell!” post, arguing that “somehow, knowing patterns exist is the same as knowing different data structures exist” and that “understanding the different ways for creating objects (hello creational patterns) is like understanding the implications of deciding to use a dequeue rather than an array or rather than a linked list.”

I was also bravely asked what I thought about these statements. Well, guess what, since one can never diss design patterns enough, this is what I think… They’re from hell!

No, there are no similarities between data structures and algorithms on one side and design patterns on the other side! Rather, there are lots of distinctions but the perhaps most important one is that data structures and algorithms are language independent whereas design patterns are language dependent. Data structures and algorithms are forever, whereas design patterns are as fleeting as the object-oriented languages for which they have been (predominantly) proposed. (That fact alone should warrant little to no attention being paid to design patterns. And if you don’t understand why OO is fleeting, time to learn a second language, other than C++.)

A second important distinction is that data structures and algorithms do not come encumbered with preferred usages. They just are. A programmer has to make deliberate choices — has to think — before selecting one over the other. Thinking is what makes, or breaks, the programmer.

In contrast, design patterns are purported “master programmer advice” strongly suggesting to young or otherwise impressionable programmers that the design patterns convey important concepts, practices, or principles that have been “prethought.” However, design patterns are not “master programmer advice!” Any master programmer would know that you cannot simply dish out a small number of generic solutions for generic situations, but that every situation is (potentially) different and warrants its own (potential) solution.

Far from “master programmers,” design patterns are the work of people who do conferences, talks, and books for a living, to further their own cause; they’re the work of academics who live in their heads and have never worked on real projects to see what kind of code their abstract ideas produce when put in practice; they’re the work of people who couldn’t care less about what toxic miasma they have unleashed because they’re too busy speaking at Software Development to push their consulting gigs to the fools who bought into the snake oil.

Design patterns are spoonfeed material for brainless programmers incapable of independent thought, who will be resolved to producing code as mediocre as the design patterns they use to create it.

The problem isn’t that knowledge of patterns is completely useless and programmers are much better off spending time learning useful knowledge like data structures and algorithms, even though that’s a true statement as far as I’m concerned. The problem is that patterns are as bad as, well, guns. Guns kill people, and pattern thinking causes brain rot.

There’s a ton of people who incorrectly think and propagate that patterns are master programmer advice when they really are over-engineered solutions for deficiencies of object-oriented programming languages. I don’t like over-engineered solutions and I don’t like object-oriented programming languages, so fleeting terminology for stuff like that is not something I’m likely to promote any time.

I realize reading the mindless drivel of Design Patterns might give some programmers instant satisfaction because everyone talks about patterns and they now feel smarter having read about them, but in reality these programmers would be better off beating their heads against Knuth because that will actually make them smarter, not just feel like they are. (Knuth is a hard read, but avoiding solving hard problems isn’t the way of becoming a master programmer any more than is studying Design Patterns. If Knuth is too much, read Skiena’s book.)

I mean, if someone thinks they are a better programmer for knowing the “visitor” and “observer” patterns, but they don’t know, say, what a skiplist is, how to perform a k-nearest neighbor search, or how to apply dynamic programming to a problem then they’re fools.

And, no, I’m not overreacting. :) I’m just making sure my point comes across loud and clear, because there needs to be a lot of shouting to counterbalance all the published fraud about patterns. I was hoping the problem would just go away, but as it isn’t, I’ll use my little soapbox to make it painfully clear that I, for one, think they’re a scourge of programming/design.

It goes without saying that terminology is important, but relevant and precise terminology arises naturally, from a need. “Callback”, for example, never had this cult or passionate discourse about its being, because that term arose naturally. But there was never a natural need for labeling encapsulated global variables as “singletons”, or other similarly trivial concepts as “visitor”, “observer”, or what-have-you patterns. These patterns are artificial concepts.

aThirdParty, Twylite’s comment was feeble-minded for the following reasons: 1. He suggests that all nouns are useful. But this is trivially a false statement, as once we reach as many nouns as there are concepts, the nouns have lost their abstractive power and have become worthless. 2. He correctly identifies that design patterns have lengthy definitions and a limited domain, but fails to note that they are, in fact, much more limited than that; they are so specific to a particular development methodology and a particular language that they effectively have no expressive power beyond those. 3. Worse, he fails to read the actual message, not seeing — even though it is plain to see in just about every one of my comments — that it is not an issue about the descriptive power of “pattern nouns” but one of leading a whole generation of programmers astray, thinking that patterns are important when they so are not. To prove his total ignorance of my point, he still posts about the descriptive powers of nouns (and even so, gets it wrong, as per points 1 and 2 above).

In other words: his comment has less relevancy to my post than Palin’s statement about Putin’s head floating in Alaskan airspace had to Couric’s question. That you did not see this feeble-mindedness in his comment I find disappointing. You are right on one thing though, I have little compassion for feeble thinking. I find my compassion is better expressed as donations to ACS than as trying to spell things out, as I did in this post, only to find e.g. that some still don’t get a simple reductio ad absurdum argument (point number one, above).

christer said, May 7, 2009 @ 12:56 am I really have no intention of replying further here, because those who haven’t gotten my message already never will. That said, some final (late) comments:

unwesen, on the silliness that is your “[design patterns are] no more than a name for an approach to pounding nails into a board” analogy. Ask yourself, do carpenters have different names for “approaches to pounding nails into a board”? (Go ask a carpenter. No, really, go do it!) Of course not! Unlike programmers, carpenters are intelligent people, and they wouldn’t even dream of doing something as moronic as assigning a name to nail pounding! Kenneth, these are all perfect examples of academic questions that have little practical value beyond getting someone a thesis. These questions and attempts at answering them do not belong on my blog. This blog is for discussing real-world issues. Greg, “language dependent” is not a singular but a plural reference to languages. And, yes, I’m 100% serious, modulo the five somewhat randomly picked subjects of the sentence.

christer said, March 16, 2010 @ 3:55 am I see you’re missing the point.

In carpentry, there is no group of “carpentry masters” who holds classes telling carpenters to use the ‘new’ “glue pattern” while poo-pooing the old way of gluing. There is no group who tries to tell other carpenters what to do or what to call it. Carpentry nomenclature occurs naturally, on the job, not from some club of theory-only carpenters selling expensive coursework and books. Indeed, it appears all other disciplines are quite sane, and it is only in software development where we have enough feeble-minded people that we have been taken in by a bunch of snake-oil salesmen selling made-up, out-of-the-blue, nonsense terminology like, say, “flyweight pattern.”

People thinking any forced pattern-name is important have been bamboozled. They have been fooled, just like people believing in the value of homeopathy, astrology, phrenology, or navel-gazing have been fooled. Just like I’m sad to see people get harmed by using homeopathic “medicine” I’m greatly saddened to see software developers harming software and their profession by applying “pattern-names” and, worse, “pattern-thinking.”

awood said, March 29, 2010 @ 9:01 pm This has got to be a cool place for a discussion that started 2-3 years ago to still be alive! I’ve wondered about this thread based on my own experience and I can’t help but agree with Christer. My first instinct was to wonder how labeling algorithms or data structures is any different than labeling patterns, but algorithms and data structures are much, much more concrete and applying them has very measurable results. And while this may seem harsh and overly judgmental, in my experience, the best programmers/engineers are the ones that think in terms of data and algorithms. Those are also often the best architects, because they truly understand what encapsulation means or they otherwise would not be able to separate data and algorithms (and I don’t consider encapsulation to be an OOP-only concept, I say this because Christer is obviously not an OOP fan!).

By contrast, programmers that litter the code with pattern usage are usually the ones that cause the most trouble. They absolutely have to label whatever code they create with the pattern they picked from the book. Those are the programmers that love to explain how an Adapter is different from a Facade or a Decorator, etc, and who include the pattern names in whatever new classes they’ve created instead of just trying to name a class with something that captures its purpose. They also think that labeling something with a pattern makes it clean, and that it’s okay for WhateverAIObjectFactory to be known by the entire codebase because it’s a recognized pattern. They go through great effort to systematically incorporate pattern names in their language and are completely unaware of the “real” problems other engineers are solving daily to make the game fast and ready for ship. To me, those guys are trouble. Patterns exist, and whether they need to be labeled or not can be debated. All I can say is, when I see very explicit pattern usage, my alarms trigger and I pay special attention! And that’s a reflex that has been burnt into me over time.

Oh, just a minor note. One thing that keeps coming up whenever subjects such as over-engineering, pattern bashing, etc, are in question, is the architecture vs performance/hardcore programmer comparison. I don’t know why, but there is this assumption that you either architect code well or make it fast, and that if it’s fast it must be unmaintainable (one of the earlier comments touched on that). I don’t know why that is. I am in favor of both architecture (not OOP style though, way more in the way of DOD) and obviously speed, but I consider them orthogonal problems. It is actually easier to optimize parts of the code when that code is well isolated, sticks to solving one thing, etc.

christer said, June 25, 2010 @ 11:12 pm Matt, there’s a big difference between the word “pattern” as it occurs in a dictionary and the word “pattern” as it is used in design-pattern contexts. It doesn’t seem you see the distinction as you claim we all develop patterns and that tricks are patterns. I couldn’t disagree more.

Do you call everyday life commonalities “xxx patterns,” like you obviously do for a programming “design pattern?” Like, say, the “opening pattern” which can be applied to car doors, cans, and caps. I doubt you do. In fact, I doubt you label any commonalities outside of programming a pattern (in the “design pattern”-usage sense) even though you clearly could. Ask yourself why that is. No, really. Apply the opening pattern to your mind and consider deeply why we don’t see people talk about “design patterns” in carpentry (the “v-claw pattern”) or mathematics (the “substitution pattern”) or any other field and then draw the obvious conclusion.

And of course I’m overstating! The message would be muddled if I said that something is 98% bad. The message remains the same though: pattern terminology and usage thereof rots the brain. BTW, data structures have nothing to do with design patterns, and vice versa. Why mix them? Data structures (like algorithms) are language independent, design patterns are language dependent. The former two are timeless knowledge. The latter is perishable knowledge (with a best-before date of 1994)!

phomer
phomer said, from http://realtimecollisiondetection.net/blog/?p=81 September 24, 2008 @ 1:04 pm

Way before design patterns, I used to keep a toolbox of interesting and effective ways to solve specific problems, which I called “mechanisms”. It was just an informal, personal collection of things that I knew worked; most other programmers I’d worked with back then were similar.

It is, in all fairness, the underlying idea behind quotes like “don’t reinvent the wheel”. For simple common problems it is faster to start somewhere known, and then work your way into the full solution. I don’t need to reinvent the concept behind the wheel, but I will need to tailor a specific instance of that idea to my current needs.

Insofar as ‘patterns’ are really used as such, they form very good places to start. They have actually existed in some form or another, long before more specific abstractions like data structures, but always at an informal, unorganized level. We all tend to reuse our earlier solutions to problems that we see over and over again, and if you program long enough, just about everything you see now will come to the surface again at some point. Patterns (before they went off course) were just a simple formalization of that existing process. If we can’t learn from our previous efforts, then we are doomed to rewrite the same code (and the same bugs) over and over again.

Paul http://theprogrammersparadox.blogspot.com

Brian Will

 * https://www.youtube.com/watch?v=lbXsrHGhBAU OOP is privileging data over action. It attempts to solve a problem by decomposing it into a bunch of data types. Procedural decomposes a problem into a series of actions (functions).
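A minimal sketch of the contrast Will describes, with hypothetical names (`Invoice`, `invoice_total` are illustrative, not from any source): the OOP version privileges a data type and hangs the action on it, the procedural version privileges the action itself.

```python
# Object-oriented decomposition: define a data type, attach behavior to it.
class Invoice:
    def __init__(self, amounts):
        self.amounts = amounts

    def total(self):
        return sum(self.amounts)

# Procedural decomposition: a plain function acting on plain data.
def invoice_total(amounts):
    return sum(amounts)

print(Invoice([10, 20, 5]).total())  # 35
print(invoice_total([10, 20, 5]))    # 35
```

Both compute the same thing; the difference is only where the action lives.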

https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab by Brian Will 1. Encapsulation doesn't protect state coherence without huge structural burdens. In practice, real OOP codebases rarely achieve real encapsulation of partial program state, let alone the entire program state. 2. Most behaviors have no natural primary association with any particular data type. Consequently, object decomposition of application logic almost always produces unnatural associations of behavior and data as well as producing extra, unnatural data types and behaviors we otherwise wouldn't need.
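Will's second point can be sketched with the classic bank-transfer case (a hypothetical example, not from his article): the action touches two objects equally, so neither class is its natural home, and OOP pressure tends to produce an extra `TransferService`-style type we otherwise wouldn't need.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

# Procedurally, the action simply stands alone and touches both accounts.
# In strict OOP it must live on Account (arbitrarily favoring one side)
# or on a new manager/service object invented just to hold it.
def transfer(src, dst, amount):
    src.balance -= amount
    dst.balance += amount

a, b = Account(100), Account(0)
transfer(a, b, 30)
print(a.balance, b.balance)  # 70 30
```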

As Casey talked about, stand-alone 'objects' are a perfectly fine concept, e.g. ADTs are natural objects (data manipulated only through a defined interface). But trying to shove everything into an object mold produces Frankenstein entities with superfluous layers of abstraction.
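A stack is the textbook example of such a natural ADT-style object, a sketch of the idea rather than anyone's specific code: the data is reached only through a small defined interface, with no extra abstraction layered on top.

```python
class Stack:
    """An abstract data type: data touched only via push/pop."""

    def __init__(self):
        self._items = []  # leading underscore: internal, not part of the interface

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```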

I say all this as someone generally comfortable with high levels of performance overhead. In my experience, OOP adds complications which overwhelm the expressiveness gains of higher-level code. Linked from “OOP is ineffective” on handmade.network.

wikipedia
https://en.wikipedia.org/wiki/Circle-ellipse_problem
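The circle-ellipse problem in a few lines (a standard illustration, not taken from the Wikipedia article's code): geometrically a circle "is-a" ellipse, but once the ellipse interface allows mutation, the inherited method can break the subclass's own invariant.

```python
class Ellipse:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def stretch_width(self, factor):
        self.w *= factor  # perfectly valid for any ellipse

class Circle(Ellipse):
    def __init__(self, d):
        super().__init__(d, d)  # invariant: w == h

c = Circle(10)
c.stretch_width(2)    # inherited mutator silently breaks the invariant
print(c.w == c.h)     # False -- the "circle" is no longer a circle
```

This is why the inheritance relation that reads naturally in domain language fails as a subtype relation in code.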

links

 * http://www.dataorienteddesign.com/dodmain/node17.html Second chapter of a book on why OOP doesn't work.
 * https://mollyrocket.com/casey/stream_0019.html I always begin by just typing out exactly what I want to happen in each specific case, without any regard to “correctness” or “abstraction” or any other buzzword, and I get that working. Then, when I find myself doing the same thing a second time somewhere else, that is when I pull out the reusable portion and share it, effectively “compressing” the code. I like “compress” better as an analogy, because it means something useful, as opposed to the often-used “abstracting”, which doesn’t really imply anything useful. Who cares if code is abstract? Linked from http://www.mikedrivendevelopment.com/2014/06/compression-driven-development.html