See Nygaard Classification for the definitive definition.
Nygaard did not coin the term "Object-Oriented Programming"; Alan Kay did, so I fail to see how Nygaard's classification is "definitive". Yes, Nygaard and Dahl's Simula was the first language to have "objects" in it - if you ignore Dr. Ivan Sutherland's Sketch Pad, which predates it by five years - but regardless, Nygaard and Dahl did not use the term OO to describe Simula.
Alan Kay and the Xerox Learning Research Group borrowed from many languages, including Simula and Lisp, when making Smalltalk. In Smalltalk, everything is an object, and every action is accomplished by sending messages to objects. It was for this reason Kay described his language as being "object-oriented." Smalltalk was responsible for popularizing OO as a paradigm. Smalltalk is to OO what Lisp is to functional programming. To this day Smalltalk is considered the holy grail of OO; every single OO language is compared to it. Simula, unfortunately, is nothing but ALGOL with classes.
So if any one person's definition of OO is definitive, it's probably Alan Kay's. See Alan Kays Definition Of Object Oriented and Alan Kay On Objects.
Over the years, many authors have tried to redefine the term to mean lots of things. Some of those Definitions For Oo are discussed below.
For good or bad, the "street" ultimately is what defines existing terms, not the originators. For example, "decimation" used to mean the Roman practice of punishing armies by killing one out of every 10 soldiers. Now it means total destruction, not a tenth. Nygaard may be the Roman here.
Ah, very clever observation - if in fact you can follow up with what the street has decided the term means. Which you can't, because the street hasn't decided.
Are you suggesting we go with the originator because the street has not reached a consensus? Generally dictionaries tend to rank the different definitions. Thus, if it fits the most popular variations, then it is more OO than something that only fits a lower definition. Hmmmm. Does something that fits 2 lower definitions get considered a better fit than something that only fits the top def?
(1) Criticizing an argument that argues against topic A is not the same thing as supporting A (it might or might not be). (2) Dictionaries sometimes, but not always, give an indication as to which definitions are more common. That does NOT make the less common usages less correct.
Most contemporary dictionaries do not claim to tell which usages are correct; they tell which usages are or were in use.
The "Street" does not decide the meaning of terms. That's what the academic community is good for (and dictionaries). Otherwise the whole of human language would dissolve to ambiguity, especially if there are those who just want to forcefully use terms in speech to grab power. In any event, if there's not a good consensus within the publications of academia, or if no leader is staking out the definitions used for and in the field, then it should go with whoever coins it first.
Since we're talking about Romans, plebeians do not write history. Nygaard may not be popular among the large masses of people monkeying around with keyboards, but he certainly is relevant where it matters. Important books like Si Cp and Theory Of Objects subscribe to Nygaard's definition, and therefore generations of elite students are going to learn it. The books that popularize the other stuff for the consumption of the "street" are likely to be swallowed by the trash bin of history.
Although not an air-tight argument, I'm inclined to agree with this. Unlike street dialects, technical jargon tends to have authoritative sources of judgements of correctness, as do technical issues in general. Mis-use of the technical jargon by the masses constitutes a borrowing with a shift in meaning, not a change in the original meaning. Classic example: "quantum jump" in common parlance does not mean the same thing that it does in physics. That does not make the physics meaning incorrect. It remains intact.
But the masses in this case are not necessarily "wrong". Physics cannot be changed by the masses so there is a central point to compare. Software design has no objective core so far other than metrics such as run speed and code size, which most agree are only a portion of the picture. Plus, some "big names" disagree with Nygaard's definition, not just the "masses".
I suggest the above be moved to Definitions For Oo.
The heart of OOP is the refactoring Replace Conditional With Polymorphism.
I don't think this is a consensus opinion. See Oo Best Feature Poll.
Consider a procedural program with several 'switch' statements, each switching on the same thing. Every 'switch' block has the same list of 'case' statements in it.
Topic and counter-arguments moved to Switch Statements Smell.
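To make the smell concrete, here is a minimal Java sketch (the shape types and operations are invented for illustration). First the procedural version, with the same case list repeated in every operation; then the Replace Conditional With Polymorphism version, where each case becomes a class and the switches disappear:

  // Procedural style: every operation switches on the same type code.
  class ShapeRec {
      static final int CIRCLE = 0, SQUARE = 1;
      final int kind; final double size;
      ShapeRec(int kind, double size) { this.kind = kind; this.size = size; }
  }
  class ShapeOps {
      static double area(ShapeRec s) {
          switch (s.kind) {
              case ShapeRec.CIRCLE: return Math.PI * s.size * s.size;
              case ShapeRec.SQUARE: return s.size * s.size;
              default: throw new IllegalArgumentException("unknown kind");
          }
      }
      static double perimeter(ShapeRec s) {
          switch (s.kind) {                  // the same case list, repeated
              case ShapeRec.CIRCLE: return 2 * Math.PI * s.size;
              case ShapeRec.SQUARE: return 4 * s.size;
              default: throw new IllegalArgumentException("unknown kind");
          }
      }
  }

  // OO style: one class per former case; no switches remain.
  abstract class Shape {
      abstract double area();
      abstract double perimeter();
  }
  class Circle extends Shape {
      final double r;
      Circle(double r) { this.r = r; }
      double area() { return Math.PI * r * r; }
      double perimeter() { return 2 * Math.PI * r; }
  }
  class Square extends Shape {
      final double side;
      Square(double side) { this.side = side; }
      double area() { return side * side; }
      double perimeter() { return 4 * side; }
  }

Adding a new shape to the OO version means writing one new class; in the procedural version it means finding and editing every switch block.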
In English, the "object" of a sentence is the receiver of an action. For example, in the sentence, "Click the button", the "button" is the object.
Unfortunately, in programming, things called "objects" perform "actions". In this sense they are more akin to the "subject" of English grammar.
Correct. In English grammar the subject performs the action. The subject may receive the action when the verb is passive or intransitive. An indirect object could be argued to not receive an action, although in the whole scheme of things it does.
Object Oriented Programming focuses on "objects", so the above might be codified as:
button.click()
A contrasting example is Procedural Programming, which focuses on "verbs". This yields code like this:
click(button)
This involves an ideological change as well as a syntactic change. Rather than thinking about what events happen and how they happen, we think about what entities exist and how they interact.
An interjection here. "click" in this case is not an action being performed by button. Click is what someone else does to button, and hence button really is the object of the sentence. What seems missing is the subject - the entity performing the action "click" to which button is reacting. The subject seems almost always implicitly to be the object containing the code, in this case button.click().
No, it's only a trivial change in syntactic sugar. There are OO systems in which the normal function calling syntax is used, e.g. Common Lisp Object System: (click button). In either syntax, it's perfectly clear that button is a noun, and click is a verb. There is no illusion that in button.click(), click is anything other than a member function, and button is an object.
No, it's much more than that. In button.click(), click could be one of any number of methods, on any number of buttons, while in click(button), click is one method for any button. The reason OO programmers use that example is that it's a good way to get procedural programmers to see the difference, as most of us were procedural programmers at first too. It's not so much about the syntax as about creating a syntax that allows multiple virtual implementations. Procedural paradigms don't do this; object systems do, and CLOS is an object system. It has those multiple virtual implementations.
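A minimal Java sketch of that point (the button classes here are invented): click is declared once, each kind of button supplies its own implementation, and the call site neither knows nor cares which implementation it gets.

  abstract class Button {
      abstract void click();                     // one message...
  }
  class SubmitButton extends Button {
      void click() { System.out.println("submitting the form"); }
  }
  class CancelButton extends Button {
      void click() { System.out.println("discarding changes"); }
  }
  class Demo {
      public static void main(String[] args) {
          Button[] buttons = { new SubmitButton(), new CancelButton() };
          for (Button b : buttons)
              b.click();                         // ...many virtual implementations
      }
  }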
The Object-Oriented principles are emergent properties of this shifted focus.
See the Object FAQ at www.cyberdyne-object-sys.com, which includes the following:
Simula was the first object-oriented language providing objects, classes, inheritance, and dynamic typing in 1967 (in addition to its Algol Sixty subset). It was intended as a conveyance of object-oriented design.
The Simula67 language [Simula Language], invented by Ole Johan Dahl and Kristen Nygaard, was the first Object Oriented Programming language - Simula did not invent the object oriented approach - it implemented objects and so on. The original drafts called objects 'processes', but Dahl decided to use the word "object" about a year before the release of Simula67 (which was in 1968 of course!). The modern terminology of classes and inheritance was all there in the language. The term Object was possibly coined by Ivan Sutherland before this for something very similar, and the term Object Oriented was probably coined by Alan Kay (see Alan Kay On Objects and He Didnt Invent The Term for further discussion).
In terms of programming (as opposed to design), an OOP language is usually taken to be one which supports
Encapsulation
Inheritance (or delegation)
Polymorphism
Warning!! Warning!! This definition is considered controversial in some quarters. For instance, Encapsulation is not part of how Perl implements OO; some people might want to add classes to the mix, but that excludes prototype-based languages, like Self Language, which definitely 'feel' OO. In particular, features commonly added and removed from that definition include:
Object Identity
Automated memory management (not in Cee Plus Plus)
Classes (how is this different from Encapsulation?)
Abstract Data Types
Mutable state
At an absolute minimum, "objectness" represents combining state with behavior. (Attributes/properties with methods. Or just slots as in Self - the distinction between attributes and methods is artificial and a Bad Thing if you ever exposed an attribute and later want to change it to a method.)
I'd love to try a Wiki Weighted Vote on the above. If two others agree here, we'll set it up.
Encapsulation: (1) attributes of an object can be seen or modified only by calling the object's methods; or (2) implementation of the object can change (within reasonable bounds) without changing the interface visible to callers.
The 2nd is the stronger definition, arguing for minimizing the number of "getter/setter" "properties" exported by the object, substituting "logical business interfaces to functionality" instead.
The distinction I learned was between encapsulation as def. (2) and data hiding as def. (1), the more restrictive version in which encapsulation is enforced rather than advised. Encapsulation(2) is OOP-universal, as far as I can tell, as the primary abstractive technique--making the difference between, exempli gratia, a Cee Plus Plus class and a Cee Language struct. Data hiding(1) is rejected by Python Language and others. I personally don't mind it as a simplification tool as long as it isn't too hard to bust (Ruby Language). -- J. Jakes-Schauer
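A small Java illustration of the two definitions (the Account class is invented for the purpose): definition (1) is the private field that can only be reached through methods; definition (2) is the fact that the representation could become, say, a transaction log tomorrow without any caller changing.

  class Account {
      private long cents;                  // (1) state reachable only via methods

      // (2) a "logical business interface" rather than raw getters/setters;
      // the representation behind it can change within reasonable bounds.
      void deposit(long amountCents) {
          if (amountCents < 0) throw new IllegalArgumentException("negative deposit");
          cents += amountCents;
      }
      long balance() { return cents; }
  }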
Inheritance (or delegation): Definition of one object extends/changes that of another.
COM & CORBA camps would argue that this is an implementation issue not fundamental to "OO-ness."
(However, it's a very convenient feature for code reuse, particularly in frameworks.)
Better make that COM, CORBA, and Ada. The Ada Language, before it supported inheritance, was generally considered to be an object-based language. This terminology was used to differentiate it from the object-oriented languages. See Object Based Programming. -- David Mc Reynolds
Polymorphism: The ability to work with several different kinds of objects as if they were the same.
COM, CORBA, Java: Interfaces.
C++: "virtual" functions. (Or "interfaces" with Abstract Base Class-es and "pure" virtual functions ("=0" syntax).)
does this not apply to Java abstract methods, too? [It applies to all object methods in Java.]
Smalltalk: Run-time dispatching based on method name. (Any two classes that implement a common set of method names, with compatible "meanings" are polymorphic. No inheritance relationship is needed.)
Discussion:
I don't believe Java Interfaces have anything to do with Polymorphism, and "Interface" in CORBA is synonymous with "Class" in other Object Oriented languages. Any two classes that share a common parent and provide alternate implementations of a single method defined on that parent are "polymorphic". Alan Kay describes the polymorphic style as "Call By Desire" (as opposed to Call By Value and Call By Name). -- Tom Stambaugh
Just because you don't believe it doesn't mean it's not so. Java interfaces allow for pure polymorphism (independent of inheritance). They do also have other uses, and you may never use them as a mechanism for including polymorphism in your designs, but the mechanism is there.
Absolutely right for most OO implementation languages, like C++. But...
In COM and Java, any two objects implementing the same interface are (for the most part) "substitutable" - you can use one in place of the other. COM has no implementation inheritance, so "sharing a common parent" (other than IUnknown) is not meaningful (unless one starts talking about a particular implementation language, like C++). Java classes are limited to single inheritance, but may implement any number of interfaces. So Java tends to use interfaces over class inheritance for polymorphism.
Yes, CORBA "interfaces" tend to also mean "classes." But, two CORBA servers can implement the same CORBA interface, to achieve polymorphism. -- Jeff Grigg
CORBA interfaces support polymorphism as well as COM or Java interfaces, and references to CORBA interfaces are just as "substitutable". You can have as many different implementations of a CORBA interface as you want. I don't think one-to-one correspondences between interface and implementation are any more prevalent in CORBA than they are in any other object-oriented system. (Or am I missing something here?) -- Kris Johnson
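As a minimal Java sketch of that substitutability (the type names are invented): two classes with no common parent beyond Object are used interchangeably through a shared interface.

  interface Printable {
      void print();
  }
  class Invoice implements Printable {
      public void print() { System.out.println("printing an invoice"); }
  }
  class Photo implements Printable {
      public void print() { System.out.println("printing a photo"); }
  }
  class PrintQueue {
      static void printAll(Printable[] items) {
          for (Printable p : items)
              p.print();               // substitutable; no shared superclass needed
      }
  }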
I'd say that the two required features of an Object Oriented Language would be Encapsulation & Polymorphism. Inheritance is just how some languages provide Polymorphism. -- Jon Strayer
See "Is Java Object Oriented?" "Irrelevant" Questions of this type, and especially on a page with the title Object Oriented Programming, are always relevant. Not "Popular" with some, but worth investigation by one considering the language for use.
This exclusivist discussion does great disservice to the OO community. Really, everybody wants OO to be his side of the story.
Will anyone agree that there are several OO models and theories? And will anyone bother to give references to actual OO theory instead of going round in circles with Booch, Rumbaugh, and the like?
And by the way, the distinction between button.click() and click(button) is really naive. "Syntactic and ideological" - this gives ammunition to those who maintain that OO has nothing to do with Computer Science, when in fact it is only a part of OO folks who don't want to depart from overly simplistic intuition about what they think OO should be. -- Anonymous Coward
I propose that Programming Is In The Mind, and that Object Oriented Programming is one of many mental models that assists a programmer's thinking. I agree that dogmatic statements about what is and what is not OO are generally useless. I think a more useful discussion would endeavor to discover the various concepts used by programmers, then evaluate them for their costs and benefits and how they interact with each other. As I said elsewhere, I think the various concepts are better modeled on a continuous scale rather than a boolean (Is/Not OO, Good/Bad, Like/Dislike).
-- Rob Williams
I will add my vote to Rob's statements. As soon as you invent a label, in this case OO, you have created a church/institution. That leads to territorial squabbles. It also leads to ideology, historical revision and denial. I don't need all that baggage, I've got a job to do. -- Richard Henderson.
On a continuous scale, the statements still have a boolean ring, e.g. "generally useless", "concepts are better". It is hard to avoid booleans when considering use/not use - let alone within OO programming itself, given constructs such as if, if/else, for, do, case, and while.
It might help to think in terms of what OO tries to achieve, what was the motivation for developing OO languages? Maybe that will shed some light on what OO is.
For one, I agree that programming is in the mind. Because OO is a step closer to the way humans think (taxonomy is a good example), it helps us design and write programs that are easier to understand and that promote code reuse, cohesion, decoupling, and data protection.
While hierarchical taxonomies might model the way *some* people think (every brain is different I would note), I do not find them very reflective of the way the real world changes over time. See Limits Of Hierarchies. I think set theory is a better classification approach than trees in most cases, but implementing it tends to lead to Relational Database-like techniques, which leads us to the Holy War of the Object Relational Psychological Mismatch.
[Top, get hierarchies off your brain, OO isn't about hierarchies, that's just an implementation detail to help organize code.]
I was responding to the "taxonomy" statement, NOT saying that OO is all about trees.
[But that's not why we like objects. OO is about modelling the problem with "things" - intelligent "things" - objects. Objects, not Classes, but object instances, say for example aPerson and aCompany, are easy to reason about; it's easier for most people to reason about them by imagining a conversation between them - what might those objects say to each other or do to each other... maybe aCompany.hires(aPerson), or aCompany.fires(aPerson), or aPerson.quits(aCompany), or questions like aCompany.employs(aPerson) or aPerson.worksAt(aCompany) or aCompany.isOwnedBy(aPerson) or aPerson.isCompatible(anotherPerson) or aCompany.isOwnedBy(anotherCompany)... or aCompany.employs(anotherCompany)... oops, the last two messages were used twice, but that's OK: polymorphism picks the correct method for isOwnedBy depending on whether I pass in a company or a person, same with employs - one message, but it works for several different contexts, just like real human language. If that were procedural, I'd have to remember a far greater number of language elements to accomplish the same work, and I'd have to remember where they are, what's in the generic recordsets that I'm passing around instead of objects, and which modules to include.]
Why not model such relationships with relational modeling? Then we don't have to wade through bulky code to see the relationships, and it is more consistently implemented from developer to developer. And we can query it more easily than we could with that kind of crud hard-wired into code. Cant Encapsulate Links. -- top
Why don't you use a declarative/logic programming language then? Owns(person, company). Owns(company, company). Are-compatible(person1, person2). It's funny to see you folks talk about abstraction and polymorphism while getting all excited about syntactic sugar for physical pointers.
OO is not just syntax, it is also an approach. It is possible to implement OO principles without using OO syntax. The syntax is just there to encourage the principles. For an example of OO practices without syntax, look at api.drupal.org .
[I just explained why... Owns(person, company) forces me to remember that Owns is a valid command; if there are 300 messages in the system I must remember them all, and I simply can't remember that many commands, whereas aCompany.Owns(aPerson) forces me to remember nothing... I simply look at company and see what it can say to aPerson, or I look at person and see what it can say to aCompany. That little syntactic sugar of the "." is everything: it provides scope to the set of messages an object can respond to and removes the burden of remembering those messages from me. There is a huge difference between method(arg, arg) and arg.method(arg) insofar as what the mind must comprehend and remember. It may all compile down to the same stuff, but who cares - it's all 1's and 0's anyway. But at the language level that little "." makes a massive difference in removing complexity from the programmer's mind.]
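In Java terms, the point about the "." might be sketched like this (the classes are invented to match the discussion; note that in Java the choice between the two owns methods is made at compile time by overload resolution, though the scoping argument is the same):

  class Person { }

  class Company {
      // Everything a Company responds to is listed right here; the receiver
      // scopes the message set, so there is no global list of free-standing
      // procedures to memorize.
      boolean owns(Person p)    { return false; }   // stub bodies for the sketch
      boolean owns(Company c)   { return false; }
      boolean employs(Person p) { return true;  }
  }

  class Usage {
      public static void main(String[] args) {
          Company acme = new Company();
          Person pat = new Person();
          System.out.println(acme.employs(pat));    // "acme, do you employ pat?"
      }
  }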
And as Chris Date, Fabian Pascal et al. say, a relational DBMS is a tool for organizing facts (i.e. declarative propositions like above) about a domain. Why would you hand-code your own incompatible and byzantine version of said tool, making it completely dependent on one particular application and architecture? (There are certainly legitimate reasons, I am just wondering what yours happen to be.)
I am not top, but I can smell industry hype when I see it.
[But with objects, one simply remembers what they can say to each other, and if we forget, we know exactly where to look - we already have objects in hand, so you just look at them. It's literally like playing with Legos when you were a kid (not sure if you did). Once you have a few basic parts in hand - the entities of the behavioral portion of the problem domain - we don't really think about the data yet; that comes later. We have all the elements to build the program and solve any problem at hand; we can use the same elements to solve any number of different problems without having to create new elements. We can fluff out the data later as the need dictates. As behavior is added piece by piece, we add any data required to support that behavior. OO programs have an organization at the object level (not classes) that procedural programs can't have. Procedural programs have procedures; OO programs have context-sensitive procedures - the same procedure name may have 15 different contexts that it can work in, thus we can simplify our solution language. We could do the same with procedural code by having one large procedure that has all 15 contexts in a big switch statement. While this makes adding new procedures easier, it makes adding new contexts to groups of methods damn near impossible. The OO solution makes adding entire new contexts a breeze, and we seem to find that more important and more useful. If the word "Contains" is a message/procedure to determine if one thing contains another, the procedural solution would have a big long Contains method that handled every possible case of input. The OO solution would have a Contains method for each and every different context.]
Why are you constantly bringing up the "procedural" strawman? I explicitly mentioned declarative languages (think Haskell Language). It's hard to understand your arguments when you conflate "object-oriented" with convenient ways to define abstract data types and polymorphism. The first does not imply the second; they are different concepts.
[I'm not using it as a straw man; it doesn't matter if your procedures support polymorphism. Owns(person, company), Owns(company, company), even if polymorphic, is still harder to reason about than aPerson.Owns(aCompany) or aCompany.Owns(aPerson), because I would have to know about the Owns functions. Take Lisp generic functions, for example: very powerful, very flexible, and still, unless you know and memorize all the available functions, you're not likely to know Owns even exists. They are indispensable when you really need multiple dispatch, but in most scenarios... single dispatch is all that's needed, and the messaging object.method(arg) is simply easier to reason about because I have something to look at. I can see what messages the object responds to. Generic functions don't allow that; they make me remember it all. So, in most cases, single dispatch message oriented programming is easier; when it isn't, go for the generic function. I never claimed polymorphism was tied to abstract data types, just that they are easier to work with when tied together for most cases. Simply put... the object provides much needed scope to the list of available messages. If the Owns message only works with people and companies, then it should be on either person or company, not living on its own in a module. I love generic functions, but they aren't the answer in most scenarios. Maybe I'm missing some fundamental concept and we're just miscommunicating - I don't know Haskell, so feel free to correct me if I'm mistaken.]
There's absolutely no difference between "owns aPerson aCompany" and "aPerson # owns aCompany", it is just syntax. In fact, in Haskell I just define operator # in such a way that it transforms the second form into the first. But there's not much point to it, I don't need to delude myself regarding the "naturalness" of some design methodology. This debate on how the member access operator is somehow superior to anything else under the sun is simply stupid.
[aPerson.Contains(anAddress) is in a different context than aCompany.Contains(anAddress), but using context in this manner takes advantage of our natural social and language skills as humans. I just imagine the objects talking to each other, then it's suddenly clear what little conversation between them would solve the problem.]
[You may not agree, but most people do agree: OO is more natural. We are social animals, and personifying our objects allows us to use those innate social skills to easily remember how a large number of objects interact, simply based on context. Language is based on metaphor, abstraction, and context; OO provides all these using the same elements as the spoken language used to describe it. Nothing could be more natural than writing code like anOrder.add(anItem), aCustomer.add(anOrder), aCustomer.add(anAddress), anOrder.add(anAddress) - add being used in several different contexts, but each one having an entirely different implementation, in a separate method. Most importantly, I don't have to remember that add exists; in a procedural program I'd have to know about add, while in an OO program I just ask the object - if it supports add, then I use it. Most find it easier to maintain lots of small, simple procedures than to maintain a few giant procedures, and the smaller ones are safer because if you mess up one, it won't affect the rest. I'm sure you'll rebut everything I just said, but you always say you don't understand how we think and why we like OO, and I'm telling you why. I'm not arguing, and don't really need a rebuttal; I'm just saying this is how we think. It's been explained enough times by enough people that I don't think you can honestly say you don't understand how we think; you just have to say you disagree or you don't think that way... that's fine. But this is how we think, and you have to understand that by now!]
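A sketch of that "one word, many contexts" idea in Java (the domain classes are invented): each class gives add its own meaning, and the reader discovers what an object accepts by looking at the object.

  import java.util.ArrayList;
  import java.util.List;

  class Item { }

  class Order {
      private final List<Item> items = new ArrayList<>();
      void add(Item i) { items.add(i); }       // "add" here means: put on the order
  }

  class Customer {
      private final List<Order> orders = new ArrayList<>();
      void add(Order o) { orders.add(o); }     // "add" here means: record a purchase
  }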
It seems to me that OO is a logical progression from procedural programming. I started programming with Microsoft Professional Development System for BASIC. After a while (not long, 'cause I became a fanatic), I started designing and programming in ways that, in retrospect, I can see were object-oriented. I got a lot of satisfaction from being able to wrap up code in some high-level abstraction so I could make a procedure call that said, 'doThis(instanceData)' and not have to worry about what was involved at the nuts and bolts level. OO made it possible to wrap up the data too.
In a procedural language, my doThis(instanceData) might consist of several function calls, say doThisPart(instanceData); doThatPart(instanceData); and doSomeOtherPart(instanceData). If I wanted to mimic, so to speak, inheritance in this procedural language, I might write a second top level procedure, say doThis2(instanceData) that consisted of calls, doThis(instanceData); doStillThisOther(instanceData); and so on. One problem is that I don't have virtual functions from doThis(instanceData) that I can override, though I could further refine/break into smaller pieces, doThis(instanceData) and make calls to only the pieces I need.
OO gives us inheritance instead. Why is this approach better? -- Bryan Hoover
See Delta Isolation for a discussion on procedural "overrides".
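To make Bryan's comparison concrete, here is a minimal Java sketch (method names follow his doThis example; the classes are invented). The shared skeleton stays in the parent, and a subclass replaces just the step it needs - the very thing the procedural doThis2 had to reassemble by hand. The patterns literature calls this Template Method.

  class Task {
      final void doThis() {            // the fixed skeleton
          doThisPart();
          doThatPart();
      }
      void doThisPart() { System.out.println("default first part"); }
      void doThatPart() { System.out.println("default second part"); }
  }

  class Task2 extends Task {
      @Override
      void doThatPart() { System.out.println("refined second part"); }
  }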
How about the paradigm of "Event-Driven Programming" (yield flow control, write only event handlers, don't hog the flow, etc.), compared to and contrasted with "Object-Oriented Programming"?
E.g. the best O-O systems also have an event-based runtime system at their core? And the early event-based systems (e.g. Gosling's original Andrew window manager at CMU, etc.) were limited by the lack of a good O-O class hierarchy (e.g. the Andrew wm rewrite leading to Andrew BE2, Insets, and ATK, paralleled meanwhile by Mac App, then NeXTStep, MFC, and Java).
What exactly is Event Driven Programming? The only big difference I see between procedural EDP and OO EDP is the various ways to find or dispatch the various events. One can use polymorphism or Multiple Dispatch, switch/case statements, or even table/structure lookups. Each group will probably champion their favorite.
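For instance, a table lookup version might be sketched in Java like this (the EventLoop class and event names are invented); a switch statement or a polymorphic event object could fill the same routing role.

  import java.util.HashMap;
  import java.util.Map;

  interface Handler { void handle(); }

  class EventLoop {
      private final Map<String, Handler> table = new HashMap<>();

      void on(String event, Handler h) { table.put(event, h); }  // register

      void dispatch(String event) {
          Handler h = table.get(event);        // table/structure lookup
          if (h != null) h.handle();
      }

      public static void main(String[] args) {
          EventLoop loop = new EventLoop();
          loop.on("click", () -> System.out.println("clicked"));
          loop.dispatch("click");
      }
  }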
General Principles or Mantra of OO (Not every one will agree with all of these)
A thing knows by itself how to handle requests given to it
Common behaviors can be inherited from parents instead of repeated in each child
Objects are like a small human given a set of responsibilities
Put behavioral "wrappers" around state or data so that the class/object user does not have to know or care whether they are looking at state or behavior
Attempts at defining Object Oriented Programming:
What is Oo?
I don't know what I expected. This page looks like the beginning of a textbook on the subject. I just want to let the community know that I'm interested in OOP. -- Frank Robinson
Are you suggesting that this topic should be broken up into smaller topics?
There are numerous books and endless chapters on this topic, but I've found that Object Oriented Programming is essentially the use of two activities: Swapping Classes At Runtime (which some like to call Poly Morphism) and Delegation Pattern. Other OO techniques, such as inheritance and encapsulation, support these two. A good Object Oriented Programmer learns to use these effectively, using the tools provided by his or her Language Of Choice, even if it happens to be Visual Basic 6.
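A compact Java sketch of those two activities working together (all names invented): the Printer delegates its formatting to a Formatter, and which Formatter it delegates to can be swapped while the program runs.

  interface Formatter {
      String format(String text);
  }
  class PlainFormatter implements Formatter {
      public String format(String text) { return text; }
  }
  class ShoutingFormatter implements Formatter {
      public String format(String text) { return text.toUpperCase(); }
  }

  class Printer {
      private Formatter formatter = new PlainFormatter();

      void setFormatter(Formatter f) { formatter = f; }   // swap class at runtime

      void print(String text) {
          System.out.println(formatter.format(text));     // delegate the work
      }
  }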
Re: "There are numerous books and endless chapters on this topic, but I've found that Object Oriented Programming is essentially the use of two activities: Swapping Classes At Runtime (which some like to call Poly Morphism) and Delegation Pattern."
One gets widely different "OO essentials" depending on who they ask, I notice.
Which is why debates on the merits of OO tend to meander endlessly. Progress is made when the individual techniques and concepts that are often rolled together in OO programming are considered separately. Then they can be decoupled (some OO languages mix the concepts together in inflexible or awkward ways), studied, and improved upon independently. It is high time for the over-marketed 'OO' phrase to fade away and make room for more clarity and progress in software development.
[ That sounds very promising. Have you a pointer to such an orthogonal analysis with such clarity? ]
Ah, but that question brings one back to the problem: An analysis of what? Whose notion of OO?
My POV (so far:) is that OOP is wrongly named, and should have been called "Model Orientated Programming". The term "Object" is too restrictive, and gives the wrong focus when trying to understand what OOP is really all about:
An object is simply a concrete implementation of an abstract model that the programmer has in his or her mind. The methods of the object provide the ability to manipulate that model, thereby mapping that abstract model onto the implementation. Hence the "user programmer" manipulates that model (via the methods) and never has to worry about the implementation.
This idea was derived after I realized that an abstraction is simply a model (in your mind) which you then map onto something real (this provides interesting insight onto the nature of perception which isn't relevant here). As such, OOP provides a very good "implementation" of abstraction itself.
I don't know how one verifies the objectivity or accuracy of such a statement.
You could start by trying to generalize how you deal with the real world, because the real world requires your mind to make various simplifications (which might be called abstraction)...
If you define abstraction as ignoring irrelevant details, then the model is the relevant details, and mapping is the process of associating the model to the thing being abstracted.
It is then clear why the ability to associate "internal" state with procedures creates the essence of OOP - without such state you cannot maintain a model (which by definition always has some state). You can of course explicitly provide a (shared) state to plain old procedures, thus creating a model, but it's a bit messy... In the case of a Singleton object, such state does not need to be implemented using a block of memory (such as a Struct in C or a Record in Pascal), but can instead be provided by global variables.
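In Java terms, that contrast might look like this (the Counter model is invented): the model's state lives with the procedures that manipulate it, instead of sitting in a record handed to free-standing procedures or in globals.

  class Counter {
      private int count = 0;           // the model's state, kept with its procedures

      void increment() { count++; }    // manipulate the model...
      int value() { return count; }    // ...and interrogate it, never its innards
  }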
From such a definition it is clear that OOP's (or should that be MOP's? :) essential characteristics do not include inheritance. You could argue that it does include encapsulation & polymorphism, but I'll leave that up to the intellectual bean-counters out there (I'm more than happy with just stating that it is model orientated and so provides a good implementation of abstraction itself).
Aside: (Single) inheritance is a cute way of organizing objects, so that you can efficiently share code & data with an incremental design strategy, but it is definitely not part of the basic requirements of OOP (MOP). You can imagine more complex (and much less efficient) sharing strategies (multiple inheritance being an unimaginative one). But (single) inheritance is a very easy to understand concept, which clearly doesn't conflict with using models for understanding, and indeed is commonly used in people's day-to-day understanding of the world, so that inheritance should be part of every OOP language even if it isn't actually fundamental. -- Chris Handley
(Please see Arguments Against Oop for an extended explanation of what I wrote here.)
More attention should be paid to the "oriented" part of object-oriented. I see objects as just a way of wrapping manipulable values in a certain kind of high-level construct. This is a good thing, and should be done in any high-level language that can afford the speed hit. Whether a language or programming style should be oriented towards those objects is a separate issue. I think it doesn't give enough props to the verb, but that's just my opinion.
So, by way of example, Smalltalk is object-oriented. Python is object-based. Java, less object-based, because classes aren't objects and there are those primitive types; but more object-oriented than Python, because all functions are methods. Lisp, C++, not object-oriented or object-based; they just provide objects as one of their data types.
-- Brock Filer
See Also: Learning Object Oriented Programming, Benefits Of Oo, Arguments Against Oop, All Objects Are Not Created Equal, Alternate Object Oriented Programming View, Programming Paradigm