Alan Kay

talk
https://www.youtube.com/watch?v=oKg1hTOQXoY&t=615s

void pointer
https://archive.ph/5p1RZ ".... If you focus on just messaging -- and realize that a good metasystem can late bind the various 2nd level architectures used in objects -- then much of the language-, UI-, and OS based discussions on this thread are really quite moot. ....."

Which suggests the "late binding" of a void pointer: the sender holds an untyped reference, and what a message means is resolved only at the receiver, at run time. A function is an idea, but an idea isn't a function. Alan Kay's way of speaking is like saying: "... the bullets contain the gun ...."
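A minimal C sketch of that reading (all names invented for illustration, not Kay's): the sender holds only a void pointer and a selector string, and which function runs is decided by the receiver's own table at send time.

 #include <stdio.h>
 #include <string.h>
 
 /* Hypothetical object layout: a receiver carries its own method table,
    so the meaning of a selector is bound late, at send time. */
 typedef struct {
     const char *selector;
     void (*method)(void *self);
 } Slot;
 
 typedef struct {
     Slot *slots;
     int nslots;
 } Object;
 
 /* send() is all the sender knows; the receiver decides what it means. */
 void send(void *receiver, const char *selector) {
     Object *obj = receiver;
     for (int i = 0; i < obj->nslots; i++) {
         if (strcmp(obj->slots[i].selector, selector) == 0) {
             obj->slots[i].method(receiver);
             return;
         }
     }
     printf("doesNotUnderstand: %s\n", selector);  /* Smalltalk-style fallback */
 }
 
 static void greet(void *self) { (void)self; printf("hello\n"); }
 
 int main(void) {
     Slot slots[] = { { "greet", greet } };
     Object o = { slots, 1 };
     send(&o, "greet");   /* resolved at run time, not compile time */
     send(&o, "dance");   /* no such method: falls through gracefully */
     return 0;
 }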

Bluebook
http://stephane.ducasse.free.fr/FreeBooks/BlueBook/Bluebook.pdf (also on archive.org as bluebook.pdf): "Smalltalk-80: The Language and its Implementation", the Smalltalk book by Adele Goldberg and David Robson of Xerox PARC.

Commentary on Kay

 * Kay wrote:".... The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas....."

In C terms, properties are structs and behaviors are functions taking a pointer to that same struct as their first parameter. Since this layout lets a struct extend only one parent, as a prefix, it makes multiple inheritance impractical.
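A hedged C sketch of this claim (illustrative names, not from any book): "inheritance" works only by embedding the parent struct as a prefix, so a pointer to the child can stand in for a pointer to the parent; a second parent cannot also occupy offset zero.

 #include <stdio.h>
 
 /* Properties as a struct. */
 typedef struct {
     double x, y;
 } Point;
 
 /* Behavior as a function taking a pointer to the struct first. */
 void point_move(Point *self, double dx, double dy) {
     self->x += dx;
     self->y += dy;
 }
 
 /* Single inheritance by prefix embedding: Point must come first so
    that (Point *)&c is valid. A second parent would also need to sit
    at offset 0, which is why this layout resists multiple inheritance. */
 typedef struct {
     Point base;
     double radius;
 } Circle;
 
 int main(void) {
     Circle c = { { 0.0, 0.0 }, 2.0 };
     point_move((Point *)&c, 3.0, 4.0);  /* reuse the parent behavior */
     printf("center = (%.1f, %.1f)\n", c.base.x, c.base.y);
     return 0;
 }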


 * Kay wrote:"....I think I recall also pointing out that it is vitally important not just to have a complete metasystem, but to have fences that help guard the crossing of metaboundaries. ....."

''Would these metaboundaries be inside or outside the structs?''

Original post by Kay
http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html "prototypes vs classes was: Re: Sun's HotSpot", Alan Kay (alank at wdi.disney.com), Sat Oct 10 04:40:35 UTC 1998

Folks --

Just a gentle reminder that I took some pains at the last OOPSLA to try to remind everyone that Smalltalk is not only NOT its syntax or the class library, it is not even about classes. I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea.

The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas.

If you focus on just messaging -- and realize that a good metasystem can late bind the various 2nd level architectures used in objects -- then much of the language-, UI-, and OS based discussions on this thread are really quite moot. This was why I complained at the last OOPSLA that -- whereas at PARC we changed Smalltalk constantly, treating it always as a work in progress -- when ST hit the larger world, it was pretty much taken as "something just to be learned", as though it were Pascal or Algol. Smalltalk-80 never really was mutated into the next better versions of OOP. Given the current low state of programming in general, I think this is a real mistake.

I think I recall also pointing out that it is vitally important not just to have a complete metasystem, but to have fences that help guard the crossing of metaboundaries. One of the simplest of these was one of the motivations for my original excursions in the late sixties: the realization that assignments are a metalevel change from functions, and therefore should not be dealt with at the same level -- this was one of the motivations to encapsulate these kinds of state changes, and not let them be done willy nilly.

I would say that a system that allowed other metathings to be done in the ordinary course of programming (like changing what inheritance means, or what is an instance) is a bad design. (I believe that systems should allow these things, but the design should be such that there are clear fences that have to be crossed when serious extensions are made.)

I would suggest that more progress could be made if the smart and talented Squeak list would think more about what the next step in metaprogramming should be -- how can we get great power, parsimony, AND security of meaning?

Cheers to all,

Alan
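Kay's point above, that assignment is a metalevel change which should be encapsulated rather than done willy nilly, can be sketched in C (a loose illustration, names invented): clients never assign to the state directly; the one mutation sits behind an explicit operation, the fence.

 #include <stdio.h>
 
 /* The state is kept private; in a real program the struct definition
    would live in a .c file behind an opaque header, so clients could
    not assign to 'value' directly at all. */
 typedef struct {
     int value;
 } Counter;
 
 /* The only place an assignment to the state happens: the fence. */
 void counter_increment(Counter *self) {
     self->value += 1;
 }
 
 int counter_value(const Counter *self) {
     return self->value;
 }
 
 int main(void) {
     Counter c = { 0 };
     counter_increment(&c);  /* state change goes through the message */
     counter_increment(&c);
     printf("%d\n", counter_value(&c));  /* prints 2 */
     return 0;
 }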

Quora: "What thought process would lead one to invent object-oriented programming?" (answers and comments by Kay)
https://www.quora.com/What-thought-process-would-lead-one-to-invent-object-oriented-programming/answer/Alan-Kay-11?comment_id=177194762&comment_type=2

Alan Kay, May 25, 2020:

One way to think of some of the motivations here is to look at the problems of “definition” of any kinds of structures above what is directly in the hardware of any computer — where, even today, there is quite a distinction between “processing” and “storage”, and where active “processing” acts on passive “storage”.

For example, the biggest lack in Algol-60 was felt to be “data definition”, and many worked on this, including Hoare and Wirth (to produce Algol-W, etc). This work found its way into both the later Pascal and C languages. At the same time, the massive effort of Algol-68 happened, and this also was about data definition and a type system that could deal with parameter matching of polymorphic procedures to new data types.

A big problem was that “data” could move around and be acted on by any procedure, even if the procedure was not helpful or at odds with the larger goals. “Being careful” didn’t scale well.

Meanwhile, time-sharing and multiprocessing OSs were being developed, and “being careful” did not work at all. Instead, the decision was rightly made to protect entities from each other — and themselves — via hardware protection mechanisms. This allowed processes made by many different people to coexist while being run, and it also allowed some processes to be “servers” — to provide “services” — to others.

Processes were software manifestations of whole computers — containing both processing and state — both hidden and protected.

For example, the process that provided “data services” — for example: banking records — was actually a “computer” that had to be negotiated with. For some users it would only provide answers to questions, and would prevent their attempts to change their bank account. For special others it would allow updating, but again, not directly but through “atomic transactions” that prevented race conditions on the update.

In addition, the updates were not “munges” on a single structure, but internal to the “data services process” a whole history would be maintained using both copies, checkpointing, update logs, etc.

Now the thing to realize is that this — whole processes offering protected services — is really a good idea at any scale. First it allows much larger and more elaborate services to be done safely.

But it also makes things that weren’t safe enough at line by line programming scales to become much more safe.

It allows both useful large abstractions, but also provides a better set of abstractions at low levels of programming.

Simula I was one of the first programming languages to have some entities able to act as whole computers (and from the same sources — Simula also called these “processes”). This got me to try to generalize to everything.

And so forth.

For example, could the number “3” be a process offering services? Could the string “Quora”? Could a picture? A video? Anything at any size or complexity?

Sure! (Because each process is semantically a whole computer, there is no limit to what a process can be defined to do.)

Can we send any process to any other physical computer and expect that it can carry what it means along with it? Yes.

Do we need to be able to do this? Yes.

As I mentioned in my answer, the “math part” of this is easy if you can relax your mind like a mathematician (math is about “relationships about relationships” not pragmatism in the real-world). This provides an absurdly simple idea about organizing everything.

The catch here — as so often with mathematical ideas — is whether they have pragmatic extensions into the real world: in this case: can we run these generalizations fast enough and small enough to allow “simple things to be simple, and complex things to be possible”.

So e.g. “3+4” or “Qu” + “ora” should be the same size and speed as that which is being replaced (and with many new and more useful properties). While the very same descriptive approach should work for entire enormous computer systems.

And the software “processes” should be mappable onto the hardware “processes” (the physical computers) on a world-wide network of billions of machines.

Doing all the design and hardware and software engineering needed to pull this off in the 70s at Xerox Parc took awhile. But it paid for itself many times over in extreme power of expression, compactness, and safety.

=
==============

However, it’s worth pondering that a software object only needs to have the *potential* to have any kind of computation inside it to be a universal idea (it doesn’t have to manifest any *reality* until called for).
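One way to picture this in C (a loose sketch, all names invented): the object stores a recipe rather than a result, and nothing is computed until a message asks for it.

 #include <stdio.h>
 
 /* The object holds the *potential* for a computation: a function
    pointer plus its ingredients. Nothing runs until it is asked. */
 typedef struct {
     int (*compute)(int, int);
     int a, b;
 } Lazy;
 
 static int add(int a, int b) { return a + b; }
 
 /* Reality manifests only here, when the value is called for. */
 int lazy_value(const Lazy *self) {
     return self->compute(self->a, self->b);
 }
 
 int main(void) {
     Lazy three_plus_four = { add, 3, 4 };  /* nothing computed yet */
     printf("%d\n", lazy_value(&three_plus_four));  /* prints 7 */
     return 0;
 }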

=
===

First, I really appreciate that you asked this question.

To just jump to your last paragraph: it *is* like having independent subroutines that can call each other, but extended in the form of protected modules that provide “services”, and can do many helpful things internally and safely.

A well designed OOP system will feel as easy as doing a subroutine call for easy things, but can extend outwards to much more complex interactions.

For scaling etc. you want to have the invocation of “services” be a more flexible coupling than a subroutine call (for example, you should be able to do many other things while the service is happening, you shouldn’t have your control frozen waiting for the subroutine to respond, etc.).

Here I copy a reply to a different comment on this question, that might help. [The copied reply is the same text as the May 25, 2020 answer quoted in full above.]


=
=================

oop videos

Daniel Ingalls
https://www.youtube.com/watch?v=Ao9W93OxQ7U

Kay
https://www.youtube.com/watch?v=QjJaFG63Hlo

links
Noun, Nouns and verbs oop