Object Oriented Programming provides a different view of programming than Procedural Programming or Functional Programming. On the one hand it's about bundling the state and behavior together, but in a larger sense it's about a mindset--a way of looking at programming.
Below is a list of benefits often assigned to or claimed about OO. If you know of any evidence or detailed argument for any of these, feel free to link them in. Please use the discussion area below the list for long content.
Reuse. (But see Reuse Has Failed.)
Configurable Modularity at Runtime (improved over most procedural languages, which offer Configurable Modularity only at code-time)
Poly Morphism increases expressiveness.
Better models the real world (or at least better matches a useful model of the real world)
Better models human thought processes. Hotly contended because heavily tied to psychology theories and personal differences.
There is a fallacy involved in this assertion. Humans may abstract and model the world as objects (soft reflective abstraction), but OOP constructs the world via objects (hard projective abstraction). Those are opposite behaviors, but the surface similarities may lead one to believe that OOP better models human thought. See also: Object Vs Model, Oop Not For Domain Modeling.
Basically, humans easily hold multiple views of things - recognizing multiple patterns - whereas OOP often forces us to pick one view and entrench it in our code.
I'd be hesitant to make blanket statements about human Wet Ware outside of my own. People are different. Some are more comfortable with Everything Isa.
Looking at "comfort" level is a terrible way to measure whether something is related to "human thought processes" because it depends heavily upon other factors, such as familiarity. What the body of hard psychological evidence tells us is that all people (barring genetic defect, brain damage, or psychoactive drugs) perform the same cognitive functions, learn and think and recognize and pattern match by the same mechanisms. Humans are not that different.
Besides that, you probably are not qualified to make statements about even your own Wet Ware. Very few people keep careful tabs and make objective judgements about their own behavior and thought patterns. If you think you're one of the few exceptions, you are almost certainly deluding yourself (which is a rather common pattern in human behavior).
Compared to others, I believe I am pretty good at explaining my mental steps to come to a judgment or model if I think about it enough. This is with the exception of remembering specific events that add up to summary judgments about frequencies. I don't have a photographic memory. But perhaps we should take this discussion to another topic.
You should look up the term 'confabulation', learn what it means, and apply it to your daily life.
Do you mean my claims about my ability to turn my mental models into something clear or concrete, or the accuracy of those models in predicting activity in the real world? The sub-topic is modeling the human mind, not modeling the "actual" world accurately. It's merely a claim of an ability at introspection of thought processes sufficient to replicate much of it on paper or in an algorithm.
I'm calling you human. Humans Are Lousy At Self Evaluation. Confabulation is also common; I do it all the time, but I'm aware of the possibility so I can correct for it by establishing processes and habits. The problem is, human, that you're super-arrogant and think yourself above these issues. Regarding the rest, here's a piece of common sense that you apparently failed to learn: If one person calls you an ass, that's his problem. If many people call you an ass, that's your problem.
I see a lot of projecting going on there. And because the person/people who personally insult me rarely use handles, it could be one or a hundred and so I cannot make any reliable assessment of popularity. Further, argument-by-popularity is generally a weak argument. You can't make the world flat by voting it flat. Enough about how much we hate each other. Let's get back to the topic.
Perhaps I over-personalized your statement. Anyhow, the interaction between programming paradigms and the human mind is a complex and difficult-to-test subject. For more, please see Oop And Human Thought Process.
Reduces the impact of requirement changes on code.
Delta Isolation (group by differences)
I'm curious about this claim - Delta Isolation of what? For data, the 'best' I've seen is Functional Reactive Programming.
Sub-classing can potentially allow one to take an existing class (model) and only implement the differences from the "parent" class. A typical "training" example: A toad is like a frog except for these differences: "overridden" behavior.
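A minimal sketch of that "training" example, assuming hypothetical Frog/Toad classes; only the toad's differences (the delta) get written:

```python
# Hypothetical classes for illustration; the subclass implements
# only the differences from the parent.
class Frog:
    def skin(self):
        return "smooth"

    def habitat(self):
        return "near water"


class Toad(Frog):
    # "overridden" behavior: the delta from the parent
    def skin(self):
        return "warty"
    # habitat() is inherited unchanged
```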
Makes software simpler (less code?) - (Next several are my expansions on this)
Provides a reasonable set of organizational rules for software. And other paradigms don't? | Not necessarily... but in general, that's correct {???}
Reduces method length.
Reduces coupling (Measuring Coupling, see Coupling And Cohesion)
This one by no means happens automatically! It takes a very good Dependency Injection framework, such as polymorphic constructors (related: New Considered Harmful). In most OOP languages, reducing coupling takes a great deal of Self Discipline, which ranks this rather low in the Four Levels Of Feature. (That said, "reduces coupling" is relative. By comparison, ye'old imperative languages like C (Cee Is Not The Pinnacle Of Procedural) rank down nearabouts Turing Tarpit for reducing coupling.)
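A rough sketch of the Dependency Injection idea mentioned above (Signup, SmtpMailer, and NullMailer are hypothetical names, not from any framework): the class receives its collaborator through the constructor instead of constructing it itself, so tests can pass in a stand-in.

```python
class SmtpMailer:
    def send(self, message):
        print("smtp:", message)


class NullMailer:
    def send(self, message):
        pass  # test double: swallow the message


class Signup:
    def __init__(self, mailer):
        # the collaborator arrives through the constructor;
        # Signup never constructs an SmtpMailer itself
        self.mailer = mailer

    def register(self, user):
        self.mailer.send("welcome " + user)
        return user
```

Production code wires in SmtpMailer; a unit test wires in NullMailer, and the coupling to SMTP never enters the class under test.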
Increases cohesion (Measuring Cohesion)
Drives certain dogmatic Relational Weenies nuts when it actually solves the problem at hand. Even the lamest of paradigms sometimes work.
Provides better source code structure than procedural programming. Please elaborate on "better".
If you only use the OO model, you can't do anything wrong with the internal state, so claims a statement in Database Not More Global Than Classes.
Oo Makes Testing Easier - Discussion below
Encapsulation provides method (function) name-space management features that other paradigms either don't have, or that are language-specific within those paradigms.
Provides one approach to building custom (domain-defined) mini-languages that may rely heavily on nested expressions without having to extend the application language itself. See Expression Api Complaints for examples.
Edit Hint: It would be good to list how you can achieve these benefits using OO. Certainly you don't get beneficial effects magically, but by doing good work with the technique.
It has resisted "external" analysis. It seems Programming Is In The Mind and the benefits may depend on the individual doing the thinking about the program.
Edit Hint: There's still plenty of material to mine from Benefits Of Oo Original Discussion
Here is an "anecdotes are good enough" viewpoint:
Those of us who criticize OOP do not necessarily disagree with personal or subjective benefits; it is the extrapolation assumption that they are universal or "best practices" which is the real problem. The article also suggests that OO better models the real world, but this does not seem to be a universal primary claim even among OO proponents. [insert link to existing real-world debate when found]
OO is good because it makes testing easier.
Example?
You can often test each class in isolation.
Same with modules.
[I'd go further and say that it's no more true for classes than for other units of code organization. Classes that don't depend on any others are in the minority, and often not very interesting.]
False for any kind of modular language I can think of (C, which lacks a formal model of modules, still falls into this category, but so does Oberon(-2). Not sure about Modula-2, but I'm pretty sure it also suffers the same problems). In my OpenAX.25 C project, I was unable to test a module that used the BSD sockets API because, obviously, it would attempt to link against the BSD sockets library. I couldn't create my own module which defined the sockets API myself because of duplicate symbol definition errors. I had to resort to creating a thunk module called IPC, that defined IPC_recv(), IPC_select(), etc. These functions added a layer of indirection which allowed my test code to not only isolate the module under test, but also alter the implementations of the respective IPC functions depending on the expected state of the application (State Pattern). This made unit testing much easier, but at the rather significant cost of many lines of pointless code.
In an object-oriented language (or, more accurately, a polymorphically-oriented one, which OO definitely is), this problem simply doesn't arise. I can mock/stub out the IPC objects as I see fit in my unit test code trivially.
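A sketch of that mock/stub move, with hypothetical names (MockIpc and Server stand in for the real OpenAX.25 code): the server only requires an object with a recv() method, so the test substitutes canned input without touching a socket.

```python
# Hypothetical sketch: 'MockIpc' and 'Server' are illustrative names,
# not the actual OpenAX.25 code.
class MockIpc:
    def __init__(self, canned):
        self.canned = list(canned)

    def recv(self):
        return self.canned.pop(0)  # replay scripted input


class Server:
    def __init__(self, ipc):
        self.ipc = ipc  # any object with a recv() method will do

    def handle_one(self):
        return self.ipc.recv().upper()


# the unit test never touches a real socket
server = Server(MockIpc(["hello"]))
assert server.handle_one() == "HELLO"
```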
To get this same convenience in a modular language, your language simply must implement run-time-dispatched generic functions at the very least. Only CLOS implements this to the best of my knowledge. But, then again, it isn't called the Common Lisp Object System for nothing.
In languages that support Hyper Static Global Environments, such as Forth, you can "fake" run-time rebinding of module interfaces by simply loading/compiling the mock/stub module before compiling the tests. This wastes a bit of RAM, but since the tests occupy RAM only transiently, it shouldn't prove to be a burden. The only thing that would happen is it'll somewhat lengthen the time it takes to compile and run the unit tests.
This seems a very language-specific thing. Cee Is Not The Pinnacle Of Procedural. I agree that introducing OO into a language probably gives one more name-space management options, but this is simply because the more paradigms you have, the more options you have. But, paradigm potpourri has its own downsides.
Whether C is a pinnacle or not is utterly irrelevant (please re-read where I also said the problem applies to every other modular language I am personally aware of, including Oberon, Python, Haskell, etc.) Show me even one modular programming language that exercises late-binding at the module level. It can't be done; as soon as you do, you all but by definition have an object system. --Samuel Falvo
I first need to see if you are claiming that non-OO paradigms will *always* prevent the implementation of such a feature, or merely that it is not common in practice. For example, very few procedural languages I've encountered support nested subroutines (along with nested scope); Pascal is the only one. But this is not a fault of the procedural paradigm. It is some kind of industry habit formed by who-knows-what, for I found nested routines and scope to be useful.
Let me attempt to explain it via a bullet list to document my thought pattern. Maybe then my reasoning will become clear.
I think we can agree that entities which we call "modules" are statically linked units of compilation at run-time.
Although they may be dynamically loaded, their linkage occurs statically at the assembly language level once the loader performs address fixups.
It is possible to have two modules Y and Z such that they expose exactly the same interface.
It is not possible for a module X which uses module Y's interface to, for the purposes of unit testing, suddenly use module Z, without creating a whole new executable that is statically linked and loaded.
The reason you need statically loaded binaries here is because without it, the system's loader may still inadvertently load module Y.
When running unit tests, many different kinds of mocks may be necessary to thoroughly test module-level interfaces.
Mocking the BSD sockets library for testing a new server daemon's ability to handle various kinds of malicious input, for example.
Each kind of malicious input represents a different set of state machines; hence a State Pattern is strongly desirable here.
However, the module under test has no idea that you're trying to mock sockets.
Therefore, there are only two solutions to follow:
make a new module W, which serves as an abstraction of Y and Z, such that:
production logic invokes a function in W that configures its interface to respond to module Y.
test logic invokes a function in W that configures its interface to respond to module Z.
Module W must use function pointers or their equivalent to implement the redirection.
Thus, module W is an instance of the Bridge Pattern, but at the module level.
leave your module alone, don't bother with W at all, and turn what should be a unit test into an integration/acceptance test instead.
This is an Impedance Mismatch.
This can cause problems later on if tests are found to take up too much time.
This tends to be more difficult than necessary to automate.
A module, by all definitions currently known to me, is a static entity: it is, from the point of view of the CPU actually doing the job of running the code, just a chunk of code. CALLs made to it are done with absolute, or at the very least, jump-tables with absolute, addresses. Everything else, such as ensuring the module's interface matches your expectations prior to compiling, is a compiler-offered feature that has no concrete run-time representation, and even if it did (e.g., in the form of in-core type descriptors), it'd have no bearing on actual execution of code. Once the binary image is loaded into memory, it's fixed.
As soon as you say, "OK, this is stupid, let's add support for polymorphic module interfaces," well, you've basically re-invented object oriented architecture. More precisely, you've reinvented interfaces, a feature that basically turns your module into a kind of class, because now you've introduced the ability to have multiple concurrent instances of your "module" co-resident with each other, each offering their services to objects that they own.
Languages which I know for a fact require the "modules are static entities" invariant to hold:
Haskell
Oberon(-2)
Modula-2
Python (without resorting to unscrupulous manipulation of internal structures)
C/C++
Forth
OCaml
Java (Yes, a class file is also a module file; "module-scope" entities are things like static class methods and the like.)
Languages where this is not the case:
Common Lisp (you can redefine any existing procedure, or use (defmethod) to create a wrapper around existing procedures; this appears to be a practice that is strongly discouraged though)
Scheme (similar to CL, you can redefine a global name to mean something else)
I just can't think of any other languages in mainstream use that offer capabilities comparable, from the sole point of view of unit testing alone, to what OO provides.
If someone can prove me wrong, I'd love to hear from you. :) --Samuel Falvo
I'll have to think about this more from a compiler/interpreter perspective. But generally even IF this was an inherent flaw of procedural, one still has two options:
Wrap the low-level services with your own routines with a name that does not overlap with the built-in libraries.
Use the low-level service itself to test. In other words, rather than make a dummy socket, use a *real* socket to a "live" test service. This may be a better test anyhow because you are using the real deal.
Not if you are unit testing, the whole purpose of which is to test code in isolation. To do this, you have exactly one choice available: write a wrapper around the module in question, which is precisely what I did for the sockets API. But now the code is linked against the wrapper permanently, even in shipping code. Thankfully, the indirection doesn't add noticeable overhead.
It may all depend on how flexible the name-space management is in a given language. I see nothing that inherently prevents a substitute of API mapping for unit tests in say procedural or functional programming even if specific languages suffer from related problems. However, I suppose you could argue that having OOP at least supplies one guaranteed way to do it: polymorphically. But this makes me want to focus on features at the feature level rather than at a paradigm level.
RE: As soon as you say, "OK, this is stupid, let's add support for polymorphic module interfaces," well, you've basically re-invented object oriented architecture. More precisely, you've reinvented interfaces, a feature that basically turns your module into a kind of class, because now you've introduced the ability to have multiple concurrent instances of your "module" co-resident with each other, each offering their services to objects that they own. -- Samuel Falvo
Is this truly object-oriented? As in: results in Object Oriented Programming? In my own experience, if you squint hard enough, anything is an object: functions in functional programming, whole processes in operating-system programming, whole operating systems in distributed-systems programming, etcetera. DLLs are objects to the dynamic link-loader. It doesn't surprise me that you can consider parameterized modules and such to be objects. A better question is whether that view is sufficient to make the programming object-oriented. I'm not particularly keen on trying to answer this question. (To me, There Are Exactly Three Paradigms in use that are truly 'meaningful', and 'object-oriented' is not among them.) But I do note that mere support for parameterized modules would not meet Alan Kays Definition Of Object Oriented, would not meet the Nygaard Classification for Object Oriented, does not imply inheritance or any sort of implicit support for delegation (for Polymorphism Encapsulation Inheritance), and fails many other various Definitions For Oo. Of course, Nobody Agrees On What Oo Is, so perhaps Samuel Falvo's definition is as good as any other.
There are languages that provide parameterized and abstract modules with concurrent existence. Consider: if a programming language is modular with neat, polymorphic, parameterized modules and such, but does not provide modules as first-class entities that can be produced or replaced on the fly from within the language, would this really benefit testing? Perhaps it is having 'first-class' component-configurations that is most relevant to making units easier to test. Component configurations are necessary when dependencies go two directions (i.e. component A depends on something from component B, like a callback, and component B depends on something in component A, like a call.)
What "module" means in a programming language really depends upon how processes are glued together from constituent parts. Merely having "modular" components, be they parameterized or polymorphic or not, doesn't make the task of configuring these components easy.
Samuel Falvo says: This is what I am talking about. You can have support for parameterized modules, but the fact is, you're still invoking that module, not your mock module. Ml Language supports parameterized modules, for example, but they're utterly useless from a unit testing perspective.
Structures and State
Moved from Object Oriented Design Is Difficult
Some opinions on OOP from a structured programming perspective (not using a true OO language):
Although some of us hate the hype behind OOP, we have found some ideas from OOP to be very useful in structured programming. OOP techniques can save typing, shorten the code, reduce copy-and-pasting or include-file tricks, and can make code more readable and less error prone. However, the opposite is also true: when not used carefully, OOP can bloat up code, overly complicate code, cause more errors (especially if dangling objects exist), and cause a lot of unneeded line noise (free/create/new/destroy, etc.). This is why some prefer to use the stack, along with modules, and not just objects. Some are not fond of purely heap-based OOP languages.
Consider a case where some OOP techniques are useful: you have a struct/record that you wish to fool with. Say you need a new experimental structure based on an old one. You could copy and paste this old structure into a new file and play with it, or you could cast the struct/record using alignment tricks in C/Pascal. With old procedural coding you end up having to do these dangerous tricks, or copy-and-pasting to reuse that old structure. Sometimes you can write some procedures to wrap the old struct/record - but then you end up reinventing inheritance and writing boilerplate code! OOP hides this boilerplate: it hides the "self" or "this" or "me" parameter you would otherwise have sent in to the procedures as a pointer or VAR param. In OOP you can safely inherit and play with a struct/record (class) without copy-and-pasting the old one, or without writing procedural wrappers which emulate inheritance!
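The contrast above, sketched with a hypothetical Account record: the procedural version threads the record through every wrapper by hand, while the OO version hides the "self" plumbing and writes only the delta.

```python
# Procedural emulation: the record and its "self" are threaded by hand.
def make_account():
    return {"balance": 0}


def deposit(acct, amount):          # 'acct' is the hand-written self param
    acct["balance"] += amount


# OO version: the 'self' plumbing is hidden and only the delta is written.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount


class LoggedAccount(Account):       # inherit, then change only what differs
    def deposit(self, amount):
        self.log = "deposit " + str(amount)
        super().deposit(amount)
```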
Cee Is Not The Pinnacle Of Procedural, nor is Pascal. For one, their structures are static at run-time. If they were based on Set Theory, then one could use set operations to add or remove elements from an existing set of fields. Set theory is far more flexible for managing variations-on-a-theme than sub-typing. If you don't understand why this is so, we will never agree. Limits Of Hierarchies gives some examples. Without more details about what you are actually trying to achieve (business case), I cannot suggest an alternative language or solution. Related: Delta Isolation. --top
What? Of course you can create dynamic structures at run time in Cee or Pascal. You just can't query them using a relational language so easily, though it could be done using parameter tricks or a macro preprocessor. Consider an array of records or an array of structs. You can add and delete items from the array. A record or struct is in fact a Tuple in disguise. An array cannot be queried easily, but one could do it using tricks. Items in an array or list can indeed be inserted and deleted - at run time. Arrays or lists can expand and be saved to and loaded from a storage medium too (usually via files, but we don't have to see them as just files... they could be relationally stored in several different files, split up for optimization - those are physical issues we shouldn't worry about). The values of arrays or lists can be changed at run time. The trick would be making the items in the array more query-able than what is currently offered, perhaps reinventing Rel Project or TQL ideas. Lists and arrays usually only offer ridiculously simple operations such as remove, delete, append, insert, etc. There is no query language available for the array or list. Databases are actually a lot more about queries than people realize. What's missing in objects, structs, and records are queries.
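A hypothetical sketch of that point: an array of records is a table in disguise, but the language supplies only the "ridiculously simple" operations, so any query has to be improvised by hand.

```python
# An array of records (a table in disguise); all names are illustrative.
people = [
    {"name": "Ann", "dept": "sales"},
    {"name": "Bob", "dept": "ops"},
]


def query(rows, pred):
    # a one-line, hand-rolled stand-in for what a real query
    # language would provide over these records
    return [r for r in rows if pred(r)]


sales = query(people, lambda r: r["dept"] == "sales")
```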
As for requiring Business Cases, I've provided plenty that you've missed. More concretely, consider a GUI button widget where one wishes to modify the button to have a border around it. Inheriting the button without damaging the old code or using copy-and-pasting allows us to muck around with a new button. The button does not need to be stored relationally - are you going to store the button's caption and click behavior in a table? And how are you going to inherit this button without resorting to WinAPI-style coding using procedural code, if not using relations?
If you don't mind, I'd like to stay away from GUI engine design here and focus on biz domain objects (customers, accounts, invoices, etc.). For one, most custom app developers do not develop GUI widgets from scratch. Second, GUIs are a very long and involved topic. The answer to the above would likely depend on the details of a specific GUI engine: one GUI engine may have a difficult spot where another doesn't, and vice versa. (A set-friendly GUI engine would generally start out widgets with all possible behavior and features, and filter-based "elimination" would be used to exclude a border from a button, not the other way around as often found in tree-oriented thinking.) I'd suggest you visit Programming Language Neutral Gui if you are interested in such.
Moved response to Domain Niche Discussion.
One problem with OOP is that the algorithms become welded into the class, even though we aren't sure that algorithm will only need to be part of that class! This leads to messes such as multiple inheritance and overly complex solutions (IMO). Some algorithms simply need not be welded to a class and we can't decide on this up front immediately! As a structured programmer speaking here, I therefore do not use pure OOP languages or any language that tries to be pure (Java, etc). However, even in these so called pure languages - one can escape the object system by using the global class (i.e. in Ruby one can DEF a procedure without it being tied to a specific class). Many so called pure languages still have ways that you can escape the OOP model when you need to.
If we look back, using POP (Procedure-Oriented Programming), we must deal with all the concerns in a line. Though we can outsource the code into different functions, the main routine still controls the whole process. This is the linear model. When OOP is introduced, we can present the world in a more natural way by describing different objects and their functions. Connections between different objects form a network, a matrix of type vs. behavior. This can be called the two-dimensional model.
How is "natural" being measured here? I am bothered by that claim.
"Natural" is somewhat informal, but the improvement mentioned above could be measured in terms of the degree to which unrelated concerns can be expressed (and developed & tested) independently of one another. OOP, when introduced, was a step up from POP languages of the same era due to the indirection from interface to implementation (message passing or virtual functions). One may encapsulate unrelated concerns in different objects, and these objects may then interact blissfully ignorant of the concerns encapsulated by their companions. Compare POP, in which the calling procedure must be aware of and test for 'unrelated' concerns so that it can properly select a procedure to call, forcing the system to be coded with a much more 'global' policy and more 'global' data available to procedures.
Of course, OOP still fails grievously when dealing with related concerns such as concurrency management, persistence, logging, optimization decisions, memory management, etc. As a class, these are called Cross Cutting Concerns. If OOP succeeded at these, there would be no need for AOP. Similarly, OOP also runs into the Expression Problem - adding new 'verbs' to an interface requires touching every class in the project.
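The Expression Problem mentioned above can be shown with a tiny hypothetical shape hierarchy: the classes dispatch existing 'verbs' for free, but adding a new verb requires editing every class.

```python
import math

# Illustrative shape classes (hypothetical, for the Expression Problem).
class Circle:
    def __init__(self, r):
        self.r = r

    def area(self):
        return math.pi * self.r ** 2
    # To add a new 'verb' such as perimeter(), this class must be edited...


class Square:
    def __init__(self, s):
        self.s = s

    def area(self):
        return self.s ** 2
    # ...and so must this one, and every other shape in the project.
    # (Adding a new *shape*, by contrast, touches no existing class.)
```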
Re: expressed (and developed & tested) independently ... step up from POP languages of the same era ...
I'm skeptical. I'd like to see semi-realistic coded examples.
The simplest examples are available in procedures containing things such as 'baker.bake(ingredients)'. In procedural, the calling procedure must contain the information to locate the correct bake method, such as 'if isBreadBaker(baker) then bakeBread(ingredients) else if isPieBaker(baker) then bakePie(ingredients)'.
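That contrast, sketched with hypothetical baker classes: the polymorphic caller delegates selection to the object, while the procedural caller must test and branch itself.

```python
# Hypothetical names (BreadBaker, PieBaker) for illustration only.
class BreadBaker:
    def bake(self, ingredients):
        return "bread from " + ingredients


class PieBaker:
    def bake(self, ingredients):
        return "pie from " + ingredients


def serve(baker, ingredients):
    return baker.bake(ingredients)      # the object selects the behavior


# procedural equivalent: the caller does the selecting
def serve_procedural(kind, ingredients):
    if kind == "bread":
        return "bread from " + ingredients
    elif kind == "pie":
        return "pie from " + ingredients
```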
This is covered in Switch Statements Smell. Your example assumes a hierarchical subtype-based model of domain nouns. However, in practice variations-on-themes are best modeled with sets (buffet of semi-independent features), not hierarchies. -- top
You are in error to claim the example has anything to do with a "hierarchical subtype-based model of domain nouns" - it doesn't matter whether or not there is a 'Baker' class, only whether the object currently called 'baker' can accept the 'bake (ingredients)' message. OOP does not necessarily imply hierarchical classification schemes.
Also, 'doer.do(parameters)' is a pattern that applies to Functor Objects in general, and is generally not applied to Domain Objects. I'm not particularly fond of Domain Objects, and I agree that your "buffet of semi-independent features" (Feature Buffet Model) will often be better for modeling them. OTOH, neither procedural nor OO are particularly adept at handling Feature Buffet Model - better, I expect, would be a rules based approach.
Traditional OO polymorphism works well with "toy" examples, but not with realistic ones, at least in my domain. OO does not natively handle sets and many-to-many relationships, let alone the ability to query complex objects ad-hoc, and I consider these a flaw with it. Most of the claims on this page would melt away under the scrutiny of more realistic examples (at least for some domains). --top
I agree that OO does not effectively handle many-to-many relationships, multiple dispatch, fine-grained specialization, 'open' specialization, or complex queries. OTOH, neither does procedural, which was OO's main competitor at the time these claims were made.
Please clarify. OO's competitors are both procedural and relational; they compete with different aspects of OO. I agree that if at gunpoint I were not allowed to use relational or databases, I'd probably prefer some OOP over just procedural. Limiting the discussion to exclude powerful combos would not be very useful, unless we want to pretend we're in the disco era. Procedural and relational complement each other well without having similar territory to fight over. OOP half-invents a database, but leaves out too much. --top
Hmmm... I can see how my statements would be confusing. There are OO databases (OODBMS), which compete with relational, and then there is Object Oriented Programming (OOP), which does not compete with relational. The two don't coincide or compete except when programmers start Reinventing The Database In Application (which does happen, but is not a practice I recommend). In general, these applications of 'OO' should be evaluated separately. My statements above were made in reference to OOP, not OODBMS.
Again, I disagree about OO not competing with relational. The Visitor Pattern is a prime example of something that relational would do differently. When you make a choice to use visitor, you are un-choosing a relational solution. As far as OODBMS, some say there is no such thing because OODBMS do not have any real encapsulation. They only share OO's navigational structuring (that is, a mass wad of pointers with a dab of tree-ness), which almost nobody would promote if OO didn't do it. --top
How does use of Visitor Pattern "un-choose" a relational solution? Explain this to me in logical terms.
A given solution cannot be both at the same time (at least not without making a mess). Therefore, it must either be an OO solution or a relational solution.
[What do you mean by this? OO and relational approaches are complementary rather than competing.]
Agreed. I feel Top Mind is assuming the consequent here.
I don't understand why this isn't more obvious. There's some kind of translation/communication block going on. Take the Double Dispatch Example. The solution is pretty much going to be either OOP, relational, or some other paradigm. --top
In Double Dispatch Example, your 'relational weenie' example implements OOP techniques and stores (or references) the scripts/functions for each 'printer object' within the RDBMS. Refer back to page anchor OOP_tehniques_impl.
You mean code or references to code in data structures? OO has no monopoly on that. Lisp was doing it about 7 years before the first OOP language. I'm not sure what paradigm we should give that credit to. Perhaps none owns it. It was easy to do in machine language. Later languages made the technique more difficult, subtracting that ability, perhaps fearing it made programs less predictable. See History Of Code In Structures.
(page anchor not-just-code-in-structures) I mean dispatch based on associating 'objects' (printers) with 'code' (scripts) to process certain 'commands' (printing shapes) into which you'll need to feed both other data about that object (so that the script can print the shape to the right printer) and other data about that command (so that the script can print the shape with the correct size and position). And hand-implementation is the early stage in the life cycle for nearly all programming language methodologies and techniques - that doesn't make them any less OOP techniques, it only makes them less OOP languages.
Records = objects is too much word-play in my opinion.
Funny. I never suggested 'records = objects'. It is Object Identity, and the ability to associate attributes with such an identity, that makes an object.
But anyhow, both approaches "put code in data structures" (or code references). In that aspect, they both use a similar technique, and I don't give OO credit for inventing that technique, as described in History Of Code In Structures. Merely using a common technique does not break my overlap claim any more than both using Quick Sort would. In one, the structure is objects tied together via references; in the other, the structure is an RDBMS table.
History is irrelevant. Credit for the technique is irrelevant. That one has "put code in data structures" is, guess what? That's irrelevant too. OOP is not about 'code in data structures' (and many implementations do not put the dispatch code into the data structures). OOP is about dispatching messages and commands to things associated with object identity.
Regardless, your Double Dispatch Example is a poor example if your goal is to demonstrate relational and OO not working together, even though I don't believe it a good example of them working together. Relational and OO are readily integrated by using OO to, according to database provided configurations, construct programs on the fly for processing 'inputs'... which may also come from the database. As noted elsewhere, the only thing RDBMS is missing to make this complementary union nearly perfect is the ability to 'subscribe' to queries with Delta Isolation (e.g. via dynamic insert/update/delete events on the 'view' so subscribed).
The last 2 paragraphs are not clear. One either puts the code references in tables *or* puts them in objects/classes. It's a pretty strong dichotomy. They *are* fighting over territory. It's hard to be much clearer. You either put "the stuff" in tables, or you put "the stuff" in objects. --top
Your assertions are clear, Top. They're just wrong. OOP can be implemented using tables, at which point putting code references into objects/classes and putting them into tables will happily coincide. And that (hand-implementing OOP techniques using tables) is exactly what your Double Dispatch Example does.
They are not wrong. If so, please re-word as formal logic. Code-in-structures is NOT unique to OOP, as already described. That is a fact. It's like saying that inheritance provides defaults by having the default behavior in the parent class; therefore, anything with defaults is "OO".
As formal logic: (A) OOP may be implemented using tables by associating object identity with properties (attributes, behavior descriptions). (B) If OOP can be implemented using tables, one will necessarily be putting the code references in tables *and* putting them in objects/classes. (C) Following from A & B, one can put code references into objects/classes and tables at the same time. (D) Therefore, your assertion to the contrary (that this "is a pretty strong dichotomy", exclusive "either/or") is in error.
You seem to be assuming that emulation is equivalence. Relational tables can be used to emulate OO (if that's what is happening) because tables are flexible and powerful, NOT because they "are" OO. OO can also emulate tables. Turing Equivalency. If I emulate Small Talk in Java, does that mean that Java *is* Small Talk?
No, I'm only assuming that emulation is "implementation".
Regarding your "code-in-structures" statement: I agree, 'code-in-structures' is not OO. But your Double Dispatch Example, as I have already explained (at page anchor not-just-code-in-structures), is much more than 'code-in-structures'. Your Double Dispatch Example also has the following traits: Object Identity (foreign keys to printers), and dispatch to determine behavior based on recipient of request/command message (printer reference and shape identifier determines shape script... same as printer.drawCircle(), printer.drawRectangle()). Object Identity and virtual message dispatch are among the strongest defining traits of OO, being about the only features common to all OO languages (many of which lack classes and referential inheritance). Your Double Dispatch Example is a very clear implementation of OOP techniques, but using tables because you're a Relational Weenie and wanted to find a way to include tables.
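For contrast, the printer.drawCircle() form mentioned above can be sketched with virtual dispatch (hypothetical classes; illustrative names only):

```python
# Hypothetical OO form of the same dispatch: which code runs is
# determined by the callee's identity (the printer object), via
# virtual method dispatch rather than a table lookup.

class Printer:
    def draw_circle(self, size):
        raise NotImplementedError

class LaserPrinter(Printer):
    def draw_circle(self, size):
        return f"PCL circle size {size}"

class Plotter(Printer):
    def draw_circle(self, size):
        return f"PostScript circle size {size}"

def render(printer, size):
    # The caller names the message; the recipient determines the behavior.
    return printer.draw_circle(size)
```

Both forms associate behavior with an object identity; they differ only in whether the association lives in a class hierarchy or in table rows, which is the crux of the disagreement above.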
This seems to get into Definitions For Oo. I disagree with your definition, but I will not take that up here. Further, "message passing" is rather open-ended. A function call is message passing. So is HTTP.
Indeed. But OO is not just message passing, either. It is message passing with certain properties: message passing to things with 'Object Identity' such that how the message is processed depends on the callee. It seems you're sticking your eye up right against the bark and saying that you can't see the tree.
Overall, if we back up away from definition-of-OO issues, one is still faced with the implementation dichotomy of putting code (or code references) in tables *or* in language classes. Thus, at least the OOPL (app language) is "fighting over territory" with the RDBMS. Do you at least agree with this? --top
I do not agree with that. Your Double Dispatch Example is proof that there is no implementation dichotomy. And, even if you put aside the fact that your Double Dispatch Example is implementing OO between its relational storage of Domain Object print drivers and procedural driven dispatch, that example does not serve as an argument (or even as evidence) that OO and relational are somehow "fighting over territory". All it shows is that you can use relational to implement OO.
And there are OODBMSs - even encapsulation may be included by supporting getters and setters - but I happen to agree that the design is fundamentally flawed because it enforces a model on data. I'm a person who favors 'thin table' relational solutions and rejects even the Entity Relationship Modeling as a basis for organizing data in an RDBMS.
If it includes Turing Complete setters/getters, then it could be classified as an OOP programming language that removes the distinction between RAM and disk. However, I will agree that the distinction could be fuzzy. The Small Talk environment could potentially be considered a (non-relational) database, for example.
A programming system can be achieved by allowing communication to be triggered by update events in a database, but doing so is more Functional Reactive Programming than Object Oriented Programming. And the 'Turing Complete' issue is not a distinction with which I'd agree: (while there are advantages of such a restriction, there is nothing in RDBMS or OODBMS that requires avoiding Turing Complete views, updateable views, updates, and queries of data). The important distinction is on 'data' from outside the system vs. 'objects' built inside the programming system. Related: Object Vs Model.
The outside/inside dichotomy can be weak or fuzzy. A Control Table (or parts of it) may be very app-specific, for example, yet is as much part of the database as any other table. In my opinion, the definition of "database" is kind of like the Definition Of Life: there is no one trait that makes it a database (or database-ish), but rather multiple factors. --top
Hmmm? It isn't so weak or fuzzy as you believe. Programs have inputs, and inputs are clearly from 'outside' the system. If you're confused, I did mean (and say) 'from' outside vs. inside, as opposed to currently stored there. Things like Control Table and other forms of configuration data and configuration management data tend to also be 'inputs' to manipulate a program - they're from the outside even if they are very app-specific. And while 'database' tends to refer to a persistent collection of data as opposed to the management system, I don't really see what the issue of database definitions has to do with the above (perhaps a miscommunication?).
I've converted Control Tables into arrays or function calls back and forth for performance or scaling or shop-style-preference reasons. The solution design is pretty much conceptually the same. It is mostly an implementation detail whether I put such info in a non-code table or in say arrays with hard-coded constants. --top
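The interchangeability described here can be sketched as follows (hypothetical control table and handler names): the same lookup logic works whether the rows live in an RDBMS table or are hard-coded in the program.

```python
# Hypothetical control table. The rows below could just as easily be
# fetched from an RDBMS table; the lookup logic would not change,
# which is the "implementation detail" point being made above.
CONTROL_TABLE = [
    # (event_code, min_amount, handler_name)
    ("ORDER",  0,    "log_only"),
    ("ORDER",  1000, "notify_manager"),
    ("REFUND", 0,    "notify_manager"),
]

def pick_handler(event_code, amount):
    # Last matching row wins, so higher thresholds override defaults.
    chosen = None
    for code, min_amount, handler in CONTROL_TABLE:
        if code == event_code and amount >= min_amount:
            chosen = handler
    return chosen
```

The counter-argument in the thread is orthogonal to this sketch: even if the storage location is an implementation detail, who is *allowed to change* the rows determines whether the table is an input/configuration or an internal detail.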
Indeed, whether you store code in an object file, script files, or a database, is an implementation detail. Data And Code Are The Same Thing, and Logic Programming fully embraces this fact. What matters is communication between systems. In terms of data, it matters whether the code/data comes from outside a given computation system (i.e. is subject to external manipulation by users or environment) or is internal to the system.
Please clarify. "Outside" is a matter of perspective. Cold Fusion has its own internal SQL processing engine and internal tables. It is not "outside". Moving the same table to a non-Cold Fusion RDBMS does not change the basic nature of the design. (CF's internal tabling has some big flaws, but irrelevant to this.)
"Outside a given computation system" is a matter of topography and is well defined for any given computation system. And to help clarify: it really doesn't matter where you store or process the data; it only matters whether the data is under control of the computation system or its environment.
If only the app has the password to a given RDBMS table, then only the app "has control". Again, that seems a very minor thing to pivot big distinctions on.
Control within a computation system is not at all a minor thing, though your lack of System Programming and Programming Language Theory background might lead you to believe otherwise. But, while security measures like passwords support distinguishing control, they don't matter nearly so much as the control itself, which really comes down to service contracts. For example, the application also has control over that table if others are simply not allowed to touch it and the application itself is allowed to fail if anyone else touches it. If the application is required to accommodate changes to the Control Table, and things outside that application are allowed to change that Control Table, then control is clearly outside the application. The current state of the Control Table must then be considered by programmers an input to the application (something that cannot be wholly anticipated, certainly cannot be compiled into the program via Compile Time Resolution and subjected to Partial Evaluation, probably needs to be documented like other application inputs, must be maintained for backwards compatibility, etc.), and must further be considered by users a configuration for the application (something that can be tweaked, and that will need to be versioned and managed carefully). It's a big thing, and so big distinctions pivot on it.
We'll just have to Agree To Disagree. I view it as an implementation detail. It can be swapped back and forth without changing the basic nature of the design. In my opinion you are over-interpreting or over-magnifying side issues.
I've the impression that we're talking past each other. I agree that "where the Control Table is stored" is an "implementation detail". What you don't seem to grok is that "where the Control Table is stored" does NOT determine "who controls the Control Table" (this being the logical consequence of service contracts determining control). I think you assume that I would find error in your implementing a Control Table in an external RDBMS and calling it an 'implementation detail' of the application. But I don't have a problem with you doing that... with some exceptions. As I understand it, an 'implementation detail' is something that is 'encapsulated' in the 'implementation' and may be changed without breaking client code. If you allow clients of your application to tweak and re-configure that Control Table, to maintain configurations, and to treat the Control Table as an interface to the application, it is no longer an 'implementation detail'. (The importance of the distinction becomes much more obvious to programmers that have more than one client for a given unit of software.)
Regardless, I do agree that we're falling away from the topic subject, and that this conversation can be tabled.
Could you please make changes to Are Oo And Relational Orthogonal Discussion Three? I was in the process of moving your changes over, but they happened faster than I could keep up. Thus, I'll let you do the final appendings. Thanks.
In a unit testing framework, or in independent development, it would be easy to create a 'mock' baker that can report which ingredients it has baked. This mock baker could be queried after tests to ensure the caller is working correctly. These tests, and the mock baker, would be able to coexist with the final application code. In the procedural methodologies, independent testing would require mock-implementations of 'bakeBread' and 'bakePie' and so on, and the testing framework would need to be built and maintained independently of the application.
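The mock-baker idea can be sketched like this (hypothetical Baker interface; the names are illustrative, not from any real framework):

```python
# Hypothetical mock-object sketch: the mock records which ingredients
# it was asked to bake, so tests can query it afterward.

class Baker:
    def bake(self, ingredient):
        raise NotImplementedError

class MockBaker(Baker):
    """Test double that records what it was asked to bake."""
    def __init__(self):
        self.baked = []

    def bake(self, ingredient):
        self.baked.append(ingredient)

def prepare_breakfast(baker):
    # Code under test: it only cares that *some* Baker receives the
    # requests, so a mock can stand in for the real baker.
    baker.bake("bread")
    baker.bake("pie")
```

Because the caller depends only on the Baker interface, the mock coexists with production code; the procedural equivalent would require swapping out free functions like 'bakeBread' at link or build time.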
And to stem a likely objection: I fully agree that one could use Object Oriented techniques even in a Procedural Programming Language. One is free to call a 'bake' script or 'bakefn' function associated with the baker. One could even put such a script into a database. But most people would just say you're reinventing OO. Object Oriented programming languages only aim to make it easier and sometimes safer (than in procedural languages) to use these techniques. (page anchor OOP_techniques_impl) {My objection is near that anchor also.}
That's reinventing Lisp, not OOP. And, it only "makes it easier" in simplistic textbook examples or device drivers, not production design.
It is OOP, which was later reinvented in Lisp as CLOS. And read the above again: "make it easier" refers to the use of OOP techniques in OO languages (relative to the use of OOP techniques in procedural languages) - it isn't something I'd expect you to contest. Refer to prior sections of the page if you wish to contest claims about the OOP techniques themselves providing benefits over procedural methodologies.
I don't believe this is correct. That ability existed in early Lisp.
[Object orientation (in Simula I) dates from 1962, which is roughly five years after the invention of LISP, but CLOS and its direct predecessors date from the 1980s.]
The ability to implement CLOS existed in early Lisp, if that is what you mean, Top Mind. But, by that reasoning, even Snusp Language is OOP. Please don't appeal to the lowest of the Four Levels Of Feature.
No real effort to support Object Oriented programming techniques in Lisp existed until the late 70s, well after Smalltalk Language and Simula Language had some time to gestate among the people at MIT. (Well, there was at least one other effort, LOOP. Not sure when that one started, or what happened to it.) Anyhow, CLOS is another case of building one language inside another - a rather well developed feature of Lisp.
I thought it was Simula II that introduced the OO features, not Simula I. Anyhow, since Lisp makes it easy to mix data and code, any data structure can also contain code. Thus, it may have:
 (bakers
   (baker01 (attributes...) (code for baker01...))
   (baker02 (attributes...) (code for baker02...))
   (baker03 (attributes...) (code for baker03...))
   Etc...)
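A rough executable analog of that structure (hypothetical data; sketching how attributes and code references can sit side by side in one structure):

```python
# Each 'baker' entry bundles attributes with a code reference,
# analogous to the Lisp structure above. Names are illustrative.
bakers = {
    "baker01": {"specialty": "bread", "code": lambda: "kneading dough"},
    "baker02": {"specialty": "pie",   "code": lambda: "rolling crust"},
}

def run_baker(name):
    # Dispatch by looking up the code stored alongside the data.
    return bakers[name]["code"]()
```

As the surrounding discussion notes, this alone gives code-in-structures but no constructors, inheritance, or automatic 'self' reference; those require further machinery.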
Although I agree such may be an "OOP concept", it was not "invented" by OOP nor is it exclusive to OOP. Other paradigms may rightfully claim it as one of their techniques.
Code in structures without any extra support is Functional Programming. It doesn't help with constructors or inheritance, with automatic 'self' reference, with dispatch, etc.
Constructors could also be considered a form of Event Driven Programming. It's an "on-create" event. Again, it's a "shared feature". The others may be also upon further analysis.
Ah! I think I understand. You're objecting to the idea that OOP perhaps "owns" certain ways-of-organizing-code, and they can't exist within other orientations or designs. Who is promoting this idea to which you are objecting?
I suspect you'd find that many people do, in fact, believe in the existence of Multi Paradigm Programming Languages. I certainly do. Since I believe in Multi Paradigm Programming Languages, I don't consider features to be "owned" by paradigms/orientations; rather, I say that features are "necessary" to paradigms/orientations, or that a paradigm/orientation is "supported" by a language having a certain non-exclusive set of Key Language Features. Any given language feature might be necessary to supporting many paradigms.
Actually, mixing data and programming code was more or less how things started out in computers. They were separated to help manage code, ironically. Now we are shifting back that way, as modern techniques allow the power of combining them, hopefully without most of the original downsides that made the practice frowned upon.
Re: Reduces method length
I'd like to see a demonstration. The only example I can think of that made such a claim is related to the controversial Switch Statements Smell.
Re: Reduces the impact of requirement changes on code.
Moved discussion to Oop And Change Impact.
Contrast Arguments Against Oop
See original on c2.com