Could Be
I only sang because the lonely road was long; and now the road and I are gone but not the song.
I only spoke the verse to pay for borrowed time and now the clock and I are broken but not the rhyme.
Possibly, the self not being fundamental, eternity breathes only on the incidental.
-- Ernesto Galarza (1905-1984)
I am not a relativist; I do not say "I like my coffee with milk and you like it without; I am in favor of kindness and you prefer concentration camps" -- Isaiah Berlin
Mailbox:
Incidentally, although it's obviously not 100% accurate [...] having strong intuition is generally a great thing, including in my designing/programming - except when challenged by other programmers, who of course always want an iron-clad logical explanation for everything, and I often don't always know how to explain why something is the right design decision in the midst of a murky complicated haze of details. Frequently, people care more about having good explanations than they do about whether something is correct or works better. -- dm
re: having strong intuition is generally a great thing, including in my designing/programming ... I often don't always know how to explain why something is the right design decision in the midst of a murky complicated haze of details
I hear you, sometimes I feel like those poor kids trying to explain why they like Apple Jacks. "It's just the right way, OK?" Of course, it's always fun when intuition leads you the wrong way, too.
Indeed! Sometimes badly so. But hey, logic sometimes does, too!! Ideally, they are used together. For instance, intuition is excellent at generating hypotheses, while logic is better at testing them.
"A mind all logic is like a knife all blade. It makes the hand bleed that uses it." -- Rabindranath Tagore
-- dm
I would tend to think that intuition is a good way to generate new ideas or designs, but it is pretty useless as a justification. Once you have the design, you have to be able to demonstrate that it is effective - otherwise you don't really understand the design (even if you made it). That's a good way to introduce subtle problems.
Certainly. Which is why, after tersely saying some things about intuition on Feynman Algorithm, I ended with "Such intuitions still need to be tested by logic, of course.".
But similar things can be said of isolated logical conclusions in the absence of a global understanding (the latter, I would claim, is only rarely reached via intuitionless logic); that's e.g. the problem with code that works in foreseen circumstances but breaks in the (many) unforeseen circumstances. -- Doug Merritt
Doug, someplace you endorsed meditation practice. Would you be so kind as to point us at resources for persons interested in giving it a try? Thanks.
There are a lot of different forms; the ones I recommend are aiming at relaxation and greater awareness of one's mind, body, and surroundings, which decreases stress, and long term, can make one more effective in general.
The meditation forms that I think can be especially helpful towards those ends are breath meditation (originally invented as one of the 7 or 8 classic kinds of yoga), where at its simplest one simply focuses on awareness of unforced natural breathing, and "mantra" meditation, where one repeats a word or a nonsense word rhythmically to calm and quiet one's verbal mind by giving it something to do other than chatter all the time, as it otherwise does.
The basic process is to do one of those forms, or both in sequence (but not at the same time), for about 20 minutes, twice a day, on a strict schedule, always aiming to do everything in a non-forced way, always aiming to let verbal thoughts and memories drift away and to focus only on the meditation - which takes practice. Initially, one drifts off into thought or memory and forgets one is meditating. Or one falls asleep. So it takes practice to learn to stay on track.
The benefits are probably pretty correlated with what percentage of the time one stays on track. If one drifts off 95% of the time supposedly spent meditating, benefits will be minimal. If one's mind drifts off only 25% of the time, then usually one will perceive benefits from relaxation for at least the rest of the day. Approximately speaking. If one stays mostly on track every time, for weeks or months on end, one might start getting some very unexpected indirect benefits.
But one has to learn to stay on track via relaxing. If one tries to force it, that leads to frustration, not to staying on track with the meditation.
The long-term effect can be startlingly powerful, which can seem unreasonable for such simple practices, but what's going on is as if we've clenched a rock in our fist all our lives, holding on for dear life, never letting go. That hand will not be very effective for anything else, and the muscles will be in permanent spasm, but since we've been doing that for so many years, mostly we forget we're even doing it.
If we start to learn to relax that hand, by some kind of hand meditation, and let go of the rock, the relaxation frees the hand to be used for other purposes that were not possible before, not because relaxation is so amazing per se, but because it occurs when something very negative has stopped, something that always made it impossible to do many things before.
This is what we all do with our minds: they are metaphorically clenching our thoughts in a never-ending deathgrip, which makes our minds much less effective for doing anything else. So the simple-sounding process of learning to relax, let go of thoughts, and instead just simply be aware of our self, our breath, our surroundings, can lead to astonishing results.
This is not inherently religious nor mystical, and does not require belief in anything to practice, but these practices tend to be invented in religious/mystical traditions, so that's where one finds the most discussion.
For instance, the late Alan Watts, a former Christian theologian who became one of the most famous interpreters of eastern philosophy to the west, discusses an introduction to meditation: www.grandtimes.com
Web search also turned up this introduction; I don't know who wrote it or what tradition it came from, but quickly skimming it, it seems reasonable: Technique: www.angelfire.com Discussion: www.angelfire.com
And there's lots more out there that I could dig up if need be.
P.S. if you see me get irritated on wiki, you can be sure I haven't been meditating recently; there's pretty much a direct correlation.
Doug, I stumbled on this material you have here. Why not share your views in a page like Relaxation Response, so others can benefit/improve/do something useful. Thanks again. -- dl
Doug are you still working these days? You seem to have so much energy here. And if you were frequenting BBS back in the 80s you are almost as old as me :) -- dl
Hey Doug Merritt, I just wanted to compliment you on your massive patience. You've shown a lot of class during the Homoiconic Language discussion. I wish I was that calm. -- Dave Fayram
Thanks :-) On the other hand, CC (correctly) scolded me for losing my cool the other day... obviously I haven't been following my own advice (meditating for stress relief). -- dm
Doug, your contribution to Do The Most Complicated Thing That Could Possibly Work nearly brought tears to my eyes, it was so poetic - and accurate. How true that one of our most precious gifts is the ability to communicate need versus want to the client. That and a sturdy Two By Four, of course. -- Marty Schrader
Heh. Indeed. Ah, a fellow traveller on that bumpy road; my sympathies. Thanks.
"Back off, man. I'm a scientist." -- Dr. Peter Venkman, Ghostbusters.
That's just too precious. :) And appropriate. Gonna have to remember that one.
:-) Warning. Personal detail. You implied you were a scientist... PS didn't mean it on Delete Obsolete, love you to bits really. -- Andrew Cates
Thanks.
Doug, at page Thomas Kuehne you suggest that my anonymous quote is by Frank Winkler. I once attributed the quote to him only to learn from him that he is not the originator but could not say who it actually is. BTW, the link to my publications should work just fine for both netscape and internet explorer. -- Thomas Kuehne P.S.: The quote by Rabindranath Tagore is sooooooo true.
:-) Thanks for the note.
Doug, where are you?? Missing our quote librarian :)
Thanks...just busy, is all...
Doug, some time ago you added this remark to the Graphical Programming Language page:
Never mind, I'll just stick with the FBP primitives I invented for the 1983 version of the language in question. I'd still like to see them - can you post them somewhere or point to them? -- Paul Morrison
Doug, in Genetic Algorithm you referred to "The Evolution of Dishonesty". Not having read that article, does it mean that in a "rational community of computing processes", the "natural selection" processes favor dishonest behaviour?
Has it got parallel to human society?
Are there significant counterpoints or additional observations relevant to this topic?
I am particularly interested in aspects related to human behaviour (often irrational and therefore interesting). Thanks for info. -- David Liu
Yes indeed, but not in a 100% pessimistic sense; there is a pressure for some (usually small) percentage of a population to use non-cooperative means, and also a counter-pressure against that percentage growing beyond a certain point. This is a matter of "Evolutionarily Stable Strategies", a term invented by the biologist John Maynard Smith and discussed very clearly indeed in Richard Dawkins' classic book, The Selfish Gene (a title which seems to be widely misunderstood, btw).
So I would strongly suggest reading Dawkins' book, as well as the Axelrod classic "The Evolution of Cooperation" - my "Evolution of Dishonesty" was patterned after his title as a kind of homage.
You'll find a lot of other material by web searching "prisoner's dilemma", "iterated prisoner's dilemma", and other such things.
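The flavor of those search results can be sketched in a few lines. This is my own toy round-robin of iterated prisoner's dilemma strategies, using the standard payoff values (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for a lone defector); it is an illustration, not code from any of the cited books:

```python
# Toy iterated prisoner's dilemma tournament (Axelrod-style sketch).
# Standard payoffs: both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, sucker -> 0.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

# Over repeated play, mutual cooperation outscores mutual defection:
print(play(tit_for_tat, tit_for_tat))      # (600, 600)
print(play(always_defect, always_defect))  # (200, 200)
print(play(tit_for_tat, always_defect))    # (199, 204)
```

Note how tit-for-tat loses only narrowly to always-defect head-to-head, while doing far better against cooperators - the seed of the "evolutionarily stable mix" result above.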
-- dm
Doug, On the off chance (based on a non-statistically significant sample) that we share a sense of humour I thought I'd let you know I've just put up an archive of 600 1850s Punch Cartoons at john-leech-archive.org.uk Some of the ones about attitudes to women and also about children are particularly entertaining these days. -- Andrew Cates
cool...thanks! -- dm
I am not a Wiki Reductionist, since I don't tend to believe in extremes of any philosophy. But I may support the efforts of Wiki Gnomes who are Wiki Reductionists at times, as today (2005-Mar-13).
Re the point you made on MS's page, I replied to what you said in relation to relatively recent squabbles. I've seen enough (in Recent Changes etc.) to notice whether there was significant escalation of the type you've mentioned.
It seems that Wiki has been manipulated into over-concern about its own workability. As a non-reductionist, what long-term measures to promote wiki workability do you suggest?
I don't know what you mean by "over-concern about its own workability", but I think its workability clearly dropped quite sharply a very long time ago. What I believe is necessary are some non-soft measures enforced by social and technical mechanisms implemented by Ward, and that nothing short of that would make a large difference. Fortunately, Ward is now pursuing that path, and that will eventually suffice, quite aside from the precise details of what he does.
How long ago? Also, can you state in comprehensible English how Ward could conceivably implement a social enforcement mechanism for a non-soft measure which would permanently improve workability? All that we know he's doing that's possibly related to that is seeking a voluntary panel to present suggestions to him.
"How long ago?" This sounds like the start of quibbling. To those who share Ward's interest in wiki as a forum for technical discussions, it is clear that wiki has problems today, and they aren't recent temporary problems. Anyone who disagrees that there are problems would seem to be defining themselves as someone who has different goals than Ward and the rest of us technophiles -- and IMNSHO such people would be better off on a wiki where the owner agrees with their goals.
Your wording "quite sharply" seemed to imply you had a specific occasion in mind.
"Can I state..." No, because it's purely up to Ward, and it doesn't matter what my opinion is. I have past experience with administering complex online social worlds, and I know a large space of technical and social algorithms to address the many issues that inevitably arise, so naturally I know what I would do if I were Ward, and I would have implemented them long ago. But I'm not Ward, so it's irrelevant. If you're curious what the solution space includes, google is your friend. There's not a single issue here that post-dates the web -- technical, ethical, social, anthropological, etc; online communities of many sorts are significantly older than the web, and most owners of such have been believers in harder rather than softer solutions.
The point is that I have long believed that action is necessary, and he says he will now take action, and like I said before, *that* is more important than which action is taken. You'll just have to decide whether to support him or fight him (I can't imagine the sanity of the latter, but it *is* a choice).
Okay, but such actions tend to conflict with Ward's original Wiki Principles.
.... a little problem in 3D graphics ....
moved to Too Many Dimensions. I hope you don't mind. -- David Cary
(I've archived the above / previous discussion. Delete at will)
So I'm trying to build a modeling framework and it turns out that I have to build a remote execution package like VW's Opentalk framework. Except that my package seems to have the narrower purpose of duplicating. A name immediately came to mind, the Doubletalk framework. I'm sure that name will inspire confidence in everybody. :)
The only problem is that, once the design problems of Doubletalk have been solved, I don't have any intrinsic reasons to implement it. It's not like I can use it without my modeler / object browser framework. And it's not like I can implement that without the freaking 3D engine.
And while my Graphs framework is itself only 60% finished, that 60% includes just about everything that I can touch. I've got more or less working graphs now. Who needs locking, garbage collection, statistics, or resource quotas when I can't even use my graphs framework for anything yet? Who needs interprocess links when I don't have processes yet? Who needs mutating identities (the equivalent of classes/traits within Graphs) when I only have the default identity?
The bad side is that I don't have anything to DO. Well, not anything that I'm motivated to do. So I'm just wondering how common it is for programmers to suffer from this. My guess is not very often if they have to be constantly reminded of YAGNI.
In an effort to remedy this, if I have an address space from 1-2MB which I share to 5-6MB, what happens on a cache miss? Specifically, if the pager fetches a page in that space, what address will it use for that page? Is this under software control or hardware control? Because if it's the latter then I'd like the pager to use either both spaces or only the original space (the one that's not being used).
And what reasonably available CPUs / architectures would you recommend I use given that I don't want to deal with a lot of arbitrary / stupid limitations? But on the other hand, I don't want to create my own architecture so I think that dismisses FPGAs. And given an architecture, where could I learn about all about it? -- RK
Glad to see you again. -- Elizabeth Wiethoff
I hate to just say "me too", but, well ... me too! -- Dan Muller
That is really, really nice of both of you to say; thanks for making my day! -- Doug Merritt
It's kind of you to offer to answer questions. I think I should probably read my textbook before I open my mouth on questions, though. I don't want to make a fool of myself (like it matters), and I don't want to waste anyone's time, but most of all, I don't want to be a lazy bum who doesn't figure stuff out for himself. I'm sure that over the course of the next several months, I will generate questions I cannot answer nor find an answer for, and that is when I will remember your offer. I also took your suggestion about being more specific on my User Page. I will be expanding it as I think of things with which to expand it. -- James Aguilar
...I think Large Problems Are Community Problems and we should not close ourselves to ideas, we need to be watchful about Started Asa Good Cause Syndrome.
Certainly.
BTW I got anonymous and sometimes baffling attacks here. It's almost like angry soldiers shooting at innocent children, when others in their company got caught by guerrillas disguised as civilians. Lots of innocent Koreans and Vietnamese got killed by the Americans out to protect their home country. Of course the other side wanted things to happen that way too.
All very sad indeed, no question.
Doug that StartedA... page is not intended to reflect on anyone here, or elsewhere. So how about take a rest, and come back tomorrow to see whether I mean what I wrote here. I have lots of respect for you and so I am surprised that you take it that way.
On second thought I probably caused that confusion because I meant to ask you to add some history examples similar to the Cure Worse Than The Disease case, where people fill in with examples and corrected errors, etc. I apologize if you took it to mean you are the community of one I meant. I am leaving to do some murmuring to myself :)
By all means, let us communicate better. :-) -- Doug
The simulator I had mentioned was called Jugglemaster, icculus.org . I can't juggle at all but I helped port the UI for this to wxWidgets. -- Chris Mellon
I enjoyed Bowman's mind going on My God Its Full Of Stars. -- Elizabeth Wiethoff
Thanks. :-) -- Doug Merritt
I can't believe it. You actually stooped so low as to Disagree By Deleting! Worse yet, you did so with a paragraph that I can easily prove is factually correct by quoting any text on information theory.
Why are you using such tactics? Is that your way of saying you're tired of talking, so you've decided to be really rude, to piss me off so I'll go away, is that it? -- Doug
I see you did the same thing on Computation As Signal Processing. -- Doug
Um, no, and apologies if I gave that impression. I didn't realize I'd done away with your Ex Formation para and have since addressed it. FWIW typing while trying to ignore battling five year old boys is not conducive to competent editing - mea culpa.
As for the section on CASP, however, I took your first point so also deleted the sentence that gave rise to that part of your objection. I was hoping this would allow us to move on to the rest of the matter. Please review that page to see that this is so. Disagree By Deleting, or, for that matter, giving you any offence, is very far from my intention. I'm amazed and delighted by your depth. While I might prefer a less adversarial mode, I'm finding this conversation very interesting. -- Pete
I am pleased to hear you make those disclaimers, and thank you also for the compliment. A remaining issue is that, at the same time you said the above, you also made another edit to CASP, and yet my comments there are still deleted. It sounds like you are under the impression that your deletion would assist with communication and consensus, but as usual when you delete someone's words: no. Definitely not. If that is your aim, suggest the possibility, get consensus, and the other person will delete their own words if they agree, and if they disagree, discussion can continue on that disagreement. So I rather think you should restore them as a further sign of good faith; I didn't restore them myself because I didn't want an Edit War.
I don't believe I've actually indulged in an Edit War since wiki's inception. Not a proper one - have clobbered/been clobbered through over-eager editing back in days before Page History. Hmm. Perhaps back when RK was a more aggressive fellow. But even then I preferred to leave Wards Wiki for a few years than hang around and get sucked in. So please down hackles and let's see what's out there.
After which, sure, let's continue this. Something I was waiting for the right time to say: you labelled yourself a crank. I don't necessarily see that as a bad thing; people who are willing to say that they are, usually are much less so than the people who insist they are not, for one thing.
A man does not create an entire wiki devoted to whimsy - well, at present, throttled whimsy - unless he is an unmitigated Rat Bag. I know a very few people who regard themselves as normal - and I don't trust them as far as I can comfortably spit.
For another thing, I don't mind wildly unorthodox views, which is what some people mean by "crank theory". Some such can eventually turn into new paradigms, after all. I do mind people who are stubborn in their convictions and are not interested in learning. I hope you are not in that category; your willingness to continue talking seems to indicate that you may not be, and I am willing to grant more and more good faith as the conversation goes on and the evidence grows.
Ta. In crank-mode I like to use wiki as a way to understand where the ground is. The old Stone Society pages are an example. I used to wonder whether this would come back to bite me professionally. As it turns out it has both won and lost me opportunities, which is better than I deserve, I think. But I believe the danger of cranks is not just that they will turn out to favour aggressive ignorance. Paradigm shifts are often bloody affairs - look at the mess Marx made all unawares.
P.S. Although I may be getting somewhat stale, a few years back I was quite intimately familiar with XILINX FPGA's, so that topic is one of the ones I had intended to return to. -- Doug
I'm not an FPGA expert; they were the main deployment target back in Omnigon days simply because the company didn't get itself in a good enough situation to fab an ASIC. Which is to say, field-programmability isn't essential to my interests. ESOPs are, however, because they can be factored into numerically representable canonical forms. But I get ahead of myself. -- Pete
Doug did I catch you in discussing CasimirEffect type material (ExplanationsInPhysics) here? You are not guilty in Off Topic, instead it is IdontLikeThingsIdontUnderstand pattern at work, that made me want to delete those pages later on :). BTW your previous edits in my homepage were answered, then deleted after I assume you have seen it. If not, you know where to look for my response. -- MicrosoftSlave
You mean the Casimir stuff that was deleted something like a year ago? I didn't have an emotional attachment to it being deleted, else I would have restored it and said something snippy :-) . But don't you think that you're describing a poor reason to delete things? I mean, it's a big, complicated universe, and none of us will understand every aspect of it -- more to the point, none of us will understand everything that other humans know about every subject.
And yes, I followed what you were doing with the deletion on your home page, although I didn't see a direct response elsewhere. Nor do I require one, but if I overlooked something, let me know, and I'll look again. -- Doug
I did not respond to the 1st question (indirectly I have said what I can), I did respond to the second question regarding read-lockouts at Meat Ball.
IdontLikeThingsIdontUnderstand is actually a pattern in the upcoming AngryPattern I intend to research to share with this community. Right now I am focussed on 1) finding out if there are still lots of read lockouts and what to do if that exists; 2) looking for "lost sheep" :)
Your focus is understandable. Just for the record, I personally love things that I don't understand, because it's an opportunity for me to learn more, even if I can only understand a little. "De gustibus non est disputandum"? -- Doug
Doug, since I was the one who originally edited the direct quote from Duff on Duffs Device to which you objected, I just want to let you know that my intent was to make it clearer what the actual e-mail was. In doing so, I didn't change anything except for inserting linebreaks and spaces, so that the complete message would be formatted as code (note that the header and the quotes in it still are). I personally think that kind of editing is allowable even for direct quotes. But since you object to it, I'll not press it - unless I come across the original post including original linebreaks :). -- Aalbert Torsius
There's also a subtler issue of whether Tom's use of doubled backtick matched with doubled single quote should be regarded as original text or just markup to achieve the effect (or potential effect) of symmetrical double quotes.
I don't strongly disagree with any of this, it's just that for one thing, it's a slippery slope, and for another, note that wiki's "diff" is crude, so looking at an edit like that, it's not obvious that only trivial changes were made, not obvious at all.
I'm also a fan of history and archaeology, and it's important to preserve the original forms for reference. I'm also highly disturbed that the U.S. courts have held that it is perfectly acceptable for journalists to use extreme paraphrases that do not even resemble the original, yet still call them "quotes", with no indication that it was not in fact a "quote". Something went really wrong, there. If something has been paraphrased, corrected, cleaned-up, the reader at the least should know this is the case.
So I'm gun-shy on the topic, even though I see you were acting in good faith and with understandable motivations. -- Doug
This is too large a subject for me to really do justice to, so I'll limit my comments to handwaving in certain directions. Also I trust that when you said "easy", that was an accidental slip? This does not tend to be an easy area for proofs in the first place.
For starters, I didn't say Rice's Theorem depended on the halting problem -- very much the opposite, I said they were equivalent. World of difference. Specifically, Rice's theorem can be proven independently, and the halting theorem can be derived from it.
More generally, it turns out that a huge number of results are all interchangeable at a sufficiently abstract level, and all have to do with the ancient Liar's Paradox, of which Russell's paradox is one form, and it all boils down to this: in nontrivial systems that allow both unconstrained self-reference and which have some kind of negation operator (both properties fleshing out what is meant by "nontrivial"), paradox is possible.
It then depends on the details as to whether paradox is also unavoidable, in which case the system is inconsistent, or whether it is simply certain statements that are paradoxical -- which in itself is no big deal, that just gives the classic proof by contradiction, however when the statements involve self-reference, more interesting results can follow: undecidability. It's no big deal to prove "A is false", but it's far more interesting when A is "A can be proven with this system".
Back to the undecidability of equivalence of programs or anything else: there's a scale of difficulty, of course, that includes polynomial, exponential, and then undecidable (to keep things simple). The first two apply when the set of possible solutions is finite. If finite but every possibility must be searched, then it's exponential simply because the number of possibilities is 2^N for any problem of finite size N expressed in bits. This arises in particular when the problem is not a Unique Factorization Domain and has no unique normal form (which are almost the same thing, and sometimes are the same thing, but are not always the same thing). Solutions are a lot easier to recognize if they can be uniquely normalized; that allows comparison of a candidate against a criterion in potentially linear time, whereas typically the lack of a normal form means that the comparison may not even be polynomial, all by itself.
If you have the latter difficulty of a lack of a normal form (which applies BTW to things like graph equivalence), but the set of possible solutions furthermore is not finite, then you have an undecidable problem -- potentially.
It's trivial to prove that there are an infinite number of algorithms that are equivalent to any given algorithm, by induction: on each induction step, add more states that don't do anything useful.
The rest follows. There's no normal form even for unlabeled graphs, let alone programs, so in general you've got an infinite equivalence class. No appeal to the halting theorem is needed.
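The induction step above is easy to make concrete. Here is my own toy sketch: pad a program with dead code, yielding arbitrarily many syntactically distinct but behaviorally identical programs:

```python
# Sketch of the induction: from any program text, produce infinitely
# many distinct programs with identical behavior by appending states
# that don't do anything useful.
def pad(source, k):
    # Each assignment to an unused name is a "useless state"; k selects
    # which member of the infinite family of equivalent programs we get.
    dead_code = "\n".join(f"_unused_{i} = {i}" for i in range(k))
    return source + "\n" + dead_code

original = "result = 2 + 2"

variants = [pad(original, k) for k in range(1, 4)]
for v in variants:
    env = {}
    exec(v, env)                # run each padded variant
    assert env['result'] == 4   # every member of the family agrees
assert len(set(variants)) == 3  # ...yet all are textually distinct
```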
I'm not pretending to have been formal enough to have proven anything; this is just handwaving to give the flavor of it. -- Doug
[[Thanks, I know all of this, although it is somewhat misleading. But what I was asking for was a direct proof of the fact that program equivalence is undecidable, one that does not ultimately use the undecidability of the Halting Problem in some way. You have not provided any such thing at all. That is what I wanted to see. (All that you have provided is the fact that for any given problem, there is an infinity of algorithms which solve that problem. Of course that is true, but how does that alone establish the result?) I hope you can provide such a thing (in fact, a couple of such things, since you said there were many). I am sure there is at least one, but I cannot think of it. I can follow math if it is presented to me, but I cannot do much of my own.
I say what you wrote is misleading because in explicitly going on about all of these decidable classes of problem, it ignores the more crucial fact, from the point of view of this context, that there are many different degrees of undecidability between decidability and the undecidability of the Halting Problem. There is an extremely complex structure of problems that are more decidable than the Halting Problem is, yet are not themselves decidable. It is true that all creative sets are recursively isomorphic (including the Halting Problem, program equivalence, and the problem of Rice's Theorem (whether or not a program generates an index set)) but there are very many r.e. sets that are not creative and in fact every countable upper semilattice can be embedded into the structure of the recursively enumerable Turing Degrees, of which the Halting Problem's degree is the maximal element.
see Meinongian Logic though, to find one professor of philosophical logic's way of avoiding these problems]]
Oh! You know all this, and "mathematically" you follow this one particular minority speculative unproven philosopher, so you're saying you were deliberately wasting my time asking what seemed innocent questions, when actually you just have an axe to grind. Thanks so very much. Nice to see you cared so much about a mutually beneficial conversation. How friendly and gracious. It really makes my day to spend time answering a question, and then to have it thrown back in my face by someone who wasn't truly asking a question after all, but instead making a point, unbeknownst to me. I really live for days like this. Truly, you have improved the quality of my life. -- Doug
The context was a page where people found the orthodox results unintuitive. I was merely saying that there's nothing magic about the halting theorem per se...from twenty thousand feet, it's the same thing as Goedel's theorem, and I was saying this on a page where people were rejecting the Church Turing Thesis! I specifically was not talking about reducibility, that's a different issue.
Diagonalization is essentially always involved in all such things, although constructive proofs may go even further. Diagonalization is the extension of one of the foundations of mathematical argument, counting arguments/pigeonhole principle, to the transfinite domain, so of course it will inevitably pop up either directly or indirectly in any such thing.
Goedel's results can be proven in many different ways -- all involving diagonalization, usually explicitly so. The Halting Theorem can be proven in many different ways, all involving diagonalization, but frequently only implicitly (and yes, sometimes explicitly); nonetheless, you'd have a hard time if you tried to do it otherwise.
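For any readers of this page who haven't seen it, the diagonalization at the heart of the Halting Theorem can be sketched in a few lines; this is just the standard textbook argument, nothing original to the discussion above:

```latex
\text{Assume a total decider } H \text{ with } H(P,x) = 1 \iff P \text{ halts on input } x.
\text{Define } D \text{ by } D(P) := \begin{cases}
  \text{loop forever} & \text{if } H(P,P) = 1 \\
  \text{halt}         & \text{if } H(P,P) = 0
\end{cases}
\text{Then } D(D) \text{ halts} \iff H(D,D) = 0 \iff D(D) \text{ does not halt; contradiction.}
```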
I also can see that this would inevitably degenerate into "those are equivalent, you said they were different" -- "I said there were many different things that were equivalent" -- "see, you contradicted yourself again; if they're equivalent then they're not different" -- "sigh...."
Finally, I think that perhaps what you're really saying is that you know more about the topic than I do, and you're just politely and indirectly telling me that I made a bunch of misstatements. Could be. -- Doug
I didn't say you couldn't reduce the halting problem to it, very much to the contrary, I said there were other ways that did not assume the halting problem in the first place. Huge difference! My whole point is that these things are all equivalent, but that that is NOT the same thing as saying they all depend on the Halting Theorem. The latter is just plain false; it picks out the Halting Theorem as somehow having supremacy over everything else, which it does not. It is not the normal form to which everything must be reduced if possible.
I have mentioned at least 3 approaches so far that do not need the Halting Theorem for their proof. One is the sketch above concerning lack of normal forms for programs. I previously mentioned that both program equivalence and the Halting Theorem can be derived from Rice's Theorem, which itself is usually proven, unsurprisingly, via diagonalization. Thirdly, I mentioned the Speedup Theorem, which can be proven without assuming the Halting Theorem, and which directly implies the result about program equivalence.
You claim I'm being misleading because I'm not talking about e.g. Muchnik/Friedberg results. I'm not talking about those more subtle issues, it's true. So far I don't see why you think I should be. There are both gross and fine measures of "equivalence". -- Doug
I have no idea how common it is. I'm not sure I've seen such a proof, actually, I just thought it was obvious on general principles that it could be done -- it's a finger exercise at this point in the development of such things.
[Yes, it seems obvious that there should be such a proof, as I suggested, but it was not obvious what the proof actually is which is why I asked. I doubt there are "many", but I can probably figure out one fairly direct one if I want by modifying the proof using the Recursion Theorem].
Oh, and I'd previously mentioned on Goedels Incompleteness Theorem that one can also put Turing Machines or lambda functions into a normed space, where convergence maps to decidability, and then go wild applying all of the vast machinery of analysis concerning convergence issues. Diagonalization is exceedingly well hidden with that approach. "Many approaches", I said. :-) -- Doug
But there's only a hair's-breadth difference. What you know about convergence and computability can be applied to other things, such as convergence and decidability. Again, just from basic principles. I don't have time at this moment to verify, but a 10-second glance makes it look as if e.g. this page applies: www.idsia.ch -- as for what I meant by "go wild", I just mean that, as we know, there's an exceedingly rich body of theory in analysis for dealing with convergence, which in total volume must be something like a thousand-fold larger than what has been developed strictly for decidability, and thus it is very powerful to bring those tools to bear, because one can go wild in using them. :-)
His page may be, I haven't gone back to look yet. But as to the rest, what, am I too hand-wavey for your tastes? It's a classic snake-oil approach, to steal Wilf's term. Set up the right kind of norm, and your problems are solved. All you need is a triangle inequality, and then you can attack the problem, no matter what it is, with all of the tools of conventional analysis -- which is to say, more tools than are available in all of the rest of math combined. You don't seem to have a problem with the notion of doing so for computability; why does the suggestion of doing the same for decidability disturb you? To me, both are pulling the same trick.
I understand that I am not explaining clearly enough, but it would help if you would be more specific about what I need to explain better. Saying that you have "no idea" what I'm talking about can only be an exaggeration, given the level of mathematical sophistication that you are demonstrating (which may well be higher than mine in some areas, seriously, which would be another reason to assist me). For the moment, for specificity, here's one of the things I was referring to: en.wikipedia.org
I wouldn't be surprised if your response to that is "I already knew that" -- well, yeah, exactly. :-)
Oh, and "snake oil" is in the dictionary, but specifically I was referring to Wilf's "Snake Oil Identity" in his book Generatingfunctionology; I meant it in both senses, sort of...maybe 2/3 the nontechnical sense, which motivated Wilf's use of it. You know...a panacea...
mathworld.wolfram.com (uninformative)
It wasn't obvious, else I wouldn't have tried to explain all of the above while simultaneously asking you what you were asking about -- should I, for instance, know that you know all about generating functions, who Wilf is, what the Snake Oil theorem is, because obviously you have a PhD in math? Huh? *If* you do, how would I know that? (And if you don't: most people don't know, after all, including many with an undergrad degree in math). I have no clue whatsoever who you are or what your background is beyond individual items that come out in conversation, how should I know what's obvious to you? And regardless, why fault me for asking?
And your question here should not be rhetorical, because that exposes part of the point, and part of the problem with our miscommunication: the Cauchy-Schwarz inequality is in fact the triangle inequality, viewed at a more abstract level, and that generalization is a true and powerful one; furthermore, seeing that is essential to seeing why it should be obvious that one can do the things with norms that I'm claiming -- which is my whole theme here, notice: powerful true generalizations in math. Yours so far seems to be "generalizations should be considered guilty and shot until proven innocent." :-)
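For concreteness, here is the standard one-line derivation behind that claim, for a real inner-product space with the induced norm:

```latex
\|x+y\|^2 = \|x\|^2 + 2\langle x,y\rangle + \|y\|^2
          \le \|x\|^2 + 2\|x\|\,\|y\| + \|y\|^2 \quad \text{(by Cauchy-Schwarz)}
          = \left(\|x\| + \|y\|\right)^2,
\qquad\text{hence}\quad \|x+y\| \le \|x\| + \|y\|.
```

That is, Cauchy-Schwarz is exactly the ingredient that makes the induced norm satisfy the triangle inequality.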
Maybe you are in academia at that. Out here in the cold cruel world, we only get paid for handwaving, not for having exhaustive bibliographies. :-)
There's a rather large point that you are missing. Since, when I look at the tabletop to my right and to my left, I see no such reference, you are asking me to do work for you -- to go dig one up -- as compared with just chatting, which is not work. Sometimes I decide to do such work for people, sometimes I don't, but you don't seem to understand that it is work. You keep speaking as if providing the reference is the easy part but chatting is hard, but you've got it backwards.
I did explain the "going wild" part above. That part is easy; once again: I'm just referring to the truly vast apparatus of analysis in dealing with convergence issues, nothing more, nothing less. Maybe it was an unfortunate turn of phrase for some reason; I don't know.
If you'll skip unkind remarks such as the below, I'd be more motivated to look for a reference -- although it is unpredictable when I would find it. I sometimes run across something that I then add to a wiki page where the reference had been requested 6 months prior; I don't always find things instantly (nor am I actively searching the whole time). I don't often travel to a university library (where I could probably find it quickly), and it may or may not be in my own library, and if it is, is likely to be in a paper rather than a book, and my filed papers are sadly disorganized (a step up -- in the past they were both negatively organized and inaccessible).
One thing about all of this. On the one hand, it's understandable when someone says "well, in the absence of evidence, I guess I'll just have to stick with the null hypothesis and not believe a claim". What I don't understand is that you repeatedly show no interest in my attempts to explain (e.g. you show no interest in my mentioning the critical role of the Cauchy-Schwarz inequality), and you keep sticking to your guns about wanting only a reference to the literature (which amounts to a desire for the fallacy of appeal to authority). My explanations may be faulty, including being too handwavy, but such can potentially be improved.
I was forgetting: OK, I just checked, and my one attempt above to provide you with a reference was in fact a good reference on the topic; I don't see why you have an issue with it (it was something like the first Google hit, too, not at all hard to find). The previous section obviously goes to the heart of it (it has "convergence" in its title): www.idsia.ch -- so whether you understand the paper or not, I trust you will stop giving me a hard time about doing nothing but handwaving and not bothering to be specific or give references.
[Ooo, it's got the word "convergence" in the title, so it goes to the heart of the matter? Right. Yeah, as of now, I'll stop giving you a hard time -- it's obviously not possible to get water from a stone! Bye.].
I have no idea why you're being so rude. You said you talked to a specialist on the subject...print the whole of that paper and show it to him and get his opinion.
This is the kind of rudeness I really take to heart; I'd prefer for you to tone it down, if you're not trying to just make an enemy.
It's better to attempt to overgeneralize and then find you're wrong than to be too cautious about generalizing and therefore miss the cool generalizations, especially because so very many generalizations turn out to be workable. Obviously I don't think it's wrong in this case; I think I've seen it done, somewhere -- but even if I hadn't, the norm thing is a standard generalized tool.
Hypothetically, yes, but notice that I've got a good track record so far, even though a limited one -- e.g. you found that I was right about the alternate proof of Rice's theorem -- so I think I can safely deny that charge.
Now that's just unkind.
Note the page Pete just created, Math Pattern Language; perhaps you have some thoughts on the topic? -- Doug
True, but I don't see that as being highly dangerous, at the moment, since failure with proof will indicate false analogy.
"Elliptic curves" are in fact generalizations of ellipses, so it's too strong to say "they aren't elliptical"; better to say they aren't simply ellipses. Historically they arise in connection with things like elliptic integrals, used e.g. to determine the arc length of an ellipse, and further, they are associated with doubly periodic functions on a torus, which generalize singly periodic functions such as the sine and cosine that parametrize an ellipse. So it's not a crazy name.
Doug, you know too much. It is clear now that you are a machine. The game is up, you have been exposed. -- Mike
Heh. :-) No, I'm just an amateur and a generalist. But thanks. -- Doug
Doug,
Please check Ejbs And Distributed Transaction and discussion in On Topic But Not Needed. CC wanted to delete the page, and he is probably right (Eric in his recent post seems to support that). If the page has merit then I hope you can champion its refactoring a bit more.
Reason you are approached is I just noticed in Will's homepage that you remarked that SS did a real good job in summarizing Java (he mentioned EJB). And you are so much more visible and approachable than SS :)
BTW my interests are to model a "due process" for page deletion (and resurrection), so I do a bit of facilitation amongst the clashing titans.
-- dl DeleteWhenCooked
Hmm. Well, thanks for asking...it seems to me that, on top of the other issues, the current version of Ejbs And Distributed Transaction is so short that it hardly matters. If there are any sentences in it that you like, rescue them to your homepage, I would think. Costin is clearly correct in his critique that EJB is not the "only possible way", so I'd agree that there is negative value to any such statements, especially unsupported ones (and all I see are unsupported ones); they will only mislead people. Related: EJB is widely considered a sometimes useful but highly flawed technology (not that I'm an EJB expert). -- Doug
Doug, pls see my address at DavidCary homepage, drop me a quick mail & let's have a quick chat (to me quick can mean days) -- dl
A few days ago I was unhappy with the actions you took on pages I had an interest in. I am over it now (still disagree with you, though). And I still think it would be useful to have an off-wiki channel open to discuss future differences. -- dl
(taken offline and saved, read already)
P.S. I'm glad you're not angry any more. :-) But seriously, I didn't create the Dangling Link page, that's part of normal wiki conventions! -- Doug
Unhappy is not angry. I hope I do not get angry often, as I take that word to have "lost control" connotations.
On Yahoo (lowercaseofmyinitials)(underscore)australia. Please drop me a line so we have other channels to promote better understanding.
Also I am looking for code to implement Quick Diff For Vb Classic for a wiki-like thing. I could not find it when I looked a few months ago. Do you have good suggestions?
Thanks from dl
I assume you're asking me because of my comments on Diff Algorithm, but then that's confusing. Presumably you know I'm not a VB guy, so I wouldn't literally know the answer to where to find Diff written in VB, and also, I explained that a minimal diff can be written in roughly one or two hundred lines of code -- a few hours work. Especially if you prototyped it in APL; it's a very vector oriented algorithm, so you could grade up and down and be done in quite possibly 20 lines of APL.
The presentation of the results, as done on a wiki like this, is highly trivial HTML added to the resulting (DELETED, CHANGED, INSERTED) blocks.
So what am I missing? -- Doug
Doug, I cannot use APL for this code segment inside a Vb Classic app. But I would appreciate pseudocode that is as good as C2 Quick Diff (maintainable, with acceptable performance too). I hope it is not a few hours' work and already exists somewhere on the net. If not, code in Python or (yuk, Cee) would be better than nothing. -- dl
I did not suggest APL within Vb Classic, I suggested prototyping in APL and then translating. If you want pseudocode, I'm trying to tell you, my description of a small algorithm on Diff Algorithm is pseudocode. But yes, it would require some hours of work. I do not know what to point you to that would save you hours of work, but I would suggest that you may already have spent hours looking, so this may be false economy. Just implement.
Additionally let me note that, for decades now, people have implemented what they call "diff", but which is uninformed by the state of the art even as of 1975, and such naive diff algorithms turn out to truly suck in real world application. The Diff Algorithm page describes approaches that truly work in a non-naive sense.
But I guess what you really want is an off-the-shelf solution. And you can't find one. But you still wonder why some of us are Unix Bigots. ;-) Anyway, I don't do Windows (alas, there have been exceptions), and I don't do VB, so it's possible that, if you're just looking for an off-the-shelf easy plugin solution, you're asking the wrong guy. (It's also possible that the MS Windows world doesn't work that way, and that there is no such easy solution, but I wouldn't know) -- Doug
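To make the "hundred or two hundred lines" claim above concrete, here is a toy LCS-based diff in Python -- an illustrative sketch of the general approach, not the tuned algorithm described on Diff Algorithm, and lacking all of the real-world refinements discussed there:

```python
def lcs_table(a, b):
    """Dynamic-programming table of longest-common-subsequence lengths:
    t[i][j] is the LCS length of a[:i] and b[:j]."""
    m, n = len(a), len(b)
    t = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            t[i + 1][j + 1] = t[i][j] + 1 if a[i] == b[j] else max(t[i][j + 1], t[i + 1][j])
    return t

def diff(a, b):
    """Yield (' ', x), ('-', x), ('+', x) edits turning sequence a into b,
    by backtracking through the LCS table."""
    t = lcs_table(a, b)
    i, j, out = len(a), len(b), []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and a[i - 1] == b[j - 1]:
            out.append((' ', a[i - 1])); i -= 1; j -= 1
        elif j > 0 and (i == 0 or t[i][j - 1] >= t[i - 1][j]):
            out.append(('+', b[j - 1])); j -= 1
        else:
            out.append(('-', a[i - 1])); i -= 1
    return out[::-1]
```

For example, `diff("abcde", "acdef")` reports `b` deleted and `f` inserted, with everything else common. The same code works on lists of lines, which is what a wiki diff wants; the quadratic table is the naive part that the better algorithms on Diff Algorithm avoid.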
Thx Doug, you have been real helpful already as I did not know about Diff Algorithm, nor its small pseudocode. I am going to copy your comments to Quick Diff as it could be useful to other people seeking similar solutions later on. -- dl
You're welcome. I forgot to mention, off-the-shelf implementations most certainly are available in many languages; the problem is that they're all invariably very bloated, for both good and bad reasons, so usually such things are hard to translate because the elegantly simple core algorithm is completely obscured. The GNU diff utility, for instance, is absurdly bloated.
In fact, most GNU code done by the core team is absurdly bloated and verging on write-only code, as I can attest, having attempted with varying degrees of success to make minor bug fixes in many of the packages. (RMS is technically brilliant in some ways, but anti-disciplined in these matters, and has never been exposed to forces that would cause him to form different opinions and habits -- not that this is so different from most commercial-world programmers.)
You mentioned Python. Note that Python includes a standard diff library. See the documentation at www.python.org, and upon downloading the Python source, see Lib/difflib.py. At a brief glance it looks well written, but at 2000 lines it certainly is bloated, despite the industrial-strength features it offers, given that it's written in a relatively powerful language. When I said diff could be implemented in 100 to 200 lines, I meant even in C.
So I'm not so sure that it would be easier to translate it compared with rolling your own based on the description in Diff Algorithm. But it could serve as a reference to elaborate on that terse description. -- Doug
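For anyone landing on this page later: using difflib directly is straightforward. A minimal example, standard-library calls only (the sample lines are of course made up):

```python
import difflib

# Two versions of a tiny "page", as lists of lines.
old = ["the quick fox", "jumps over", "the dog"]
new = ["the quick brown fox", "jumps over", "the lazy dog"]

# unified_diff yields '---'/'+++' headers, '@@' hunk markers, and
# content lines prefixed with ' ' (common), '-' (deleted), '+' (inserted).
for line in difflib.unified_diff(old, new, lineterm=""):
    print(line)

# For wiki-style presentation, HtmlDiff renders a side-by-side HTML table.
html_table = difflib.HtmlDiff().make_table(old, new)
```

This gives both the plain-text diff and the HTML presentation in a few lines, which covers the "trivial HTML added to the resulting blocks" point above.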
Re: Diff Algorithm ...The smallest versions of this algorithm can be implemented in less space than the above text of this article; it's less complicated than it may sound. -- Doug Merritt
Doug, just want to confirm your "pseudocode" in Diff Algorithm is the section on that page that starts with "The core of diff algorithms seeks to...". I will try to pass it to a "real programmer" to get a guesstimate on implementation effort.
Also I would appreciate you dropping me an email so I can consult you privately about C2. Promise not to abuse the email. Cheers from dl 19May05
Hey Doug, I want to steal a section you wrote here ... (done already) You're both welcome to it, at any rate. P.S. Why did you suppress the wiki page name from being a link? -- Doug
Thanks Doug for the clear writing. As to the link suppression, in my case I don't like those dangling ?marks.. unless of course I want them to be there! -- Ron
Doug, I have invited (and got promises from) somebody to write about APL. I personally think that is Off Topic, as it no longer has practical relevance, and IT aspects that lack relevance are "personally Off Topic". Sorry. Note patterns are On Topic, but maybe programming patterns don't evolve quickly enough to be discussed.
BTW I have just made a comment in Helmuts page on something you may have a view, I can continue discussion with you here if you want. -- dl
[Many years ago, just before RKFanClub was created, Peter Merel created Decline Of Civility. A completely general title and the only subject of discussion was me. I was quite embarrassed by all the direct attention and aggrieved by many of the things on it. Like unverified assumptions that bordered on outright lying. And PM even managed to make mutually contradictory accusations about me. This isn't anything like a stellar example of dissent, more like a lynching mob, but it would have hurt me to interfere with it and it would have been exhausting to do so, so I let it be.
RKFanClub is very different. I mean, initially I was quite embarrassed with it and didn't dare touch it (I still don't unless something is radically misinterpreted). But by now I've learned to live with it. Not least because I find it hilarious. Some of my best stuff is there and I've reread it several times over the years. I'm quite proud of being able to write well. Playing with words is next of kin to playing with thoughts. It's design and I enjoy that.
By now, that's probably a major reason why I get angry so much when I get attacked; because it throws off my writing. I LIKE being able to replace a knee-jerk insult with an elaborate explanation that captures exactly why the other guy's position or argument is blatantly stupid. It means that after the heat of the moment is long gone, I can be proud of the contribution instead of embarrassed by it. And when I'm attacked, I don't have time for that. So not only do I get angry, but I get angry because I'm angry; not a happy cycle. And I hate adrenaline rushes in the first place. -- RK Delete at will]
Duffs Device is still in need of some cleanup. If you don't have the time for it, I may still have a copy of my version lying around. -- MS
I kept a copy of your version. After our agreement in principle, I got stuck on the question of doing a block quote. The ":" quote on c2 is broken for many browsers (including mine); it displays as a single infinitely long line. Ideas? -- Doug Merritt
I wasn't aware of that issue; I'll have to keep it in mind for future editing. I suggest using monospaced text. It would probably be a more accurate reproduction anyway, considering the age of the quote and that it came from Usenet. -- MS
Which platform/browser? Does it accept blockquote (i.e., the HTML blockquote tag)?
On Chris Mellon's homepage you wrote,
You know, the ambiguity in the word "hard" is interesting. It can mean "conceptually easy but time consuming", or it can mean "conceptually difficult but easy once you get the idea", or it can mean both or neither.
I think that, psychologically, most things seem conceptually easy once you understand them thoroughly, which means that, with sufficient background, most kinds of programming are "easy but tedious", in one sense. But there still remains the sense of easy versus hard relative to e.g. how long it takes to do something.
Good to keep in mind in future discussions. -- Doug Merritt
Does this apply to C++ and if so then in what way? Does C++ actually seem conceptually easy to C++ experts or do people never learn C++ with the kind of thoroughness necessary to make it seem conceptually easy? -- RK
It applies to C++, which has aspects that fall into every part of the spectrum. Some parts are easy but tedious, some parts are hard to understand at first but then easy, and still other parts are hard to understand and tedious. Most C++ programmers learn to use certain aspects of the language and to avoid the "dark corners" Dan mentions below, and to even forget that there are such dark corners. It is a huge, truly enormous language, and is difficult in every sense of the word to truly understand all of it thoroughly (which of course I disapprove of). -- Doug
Speaking for myself, most of C++ eventually became conceptually easy for me, barring a few dark corners. But certain aspects of the language continued to make some programming tasks tedious. -- Dan Muller
If you ask me, those dark corners are much more numerous and voluminous than you're implying. Absorbing everything in e.g. the ARM, Meyers' "Effective C++", Cline et al.'s "C++ FAQs", and Austern's "Generic Programming and the STL" is one thing (and a hell of a lot already, if you think about it), but going beyond that and grokking everything in (...mental block, I'll have to fill this in later...) is quite another; how many C++ programmers do you think really do? -- Doug
Not too many. However, most of the dark corners aren't an issue for most of the work. The worst ones appear where namespaces interact with operator overloading and templates, and in advanced template use (template overloading et al). These areas are mainly encountered when writing sophisticated libraries.
If you omit templates and namespaces, everything else is fairly straightforward, IMO. The language mechanics are low-level, but not hard to understand. When you add namespaces, you can (but usually don't) run into a few subtleties. Simple use of templates is also very straightforward, and worries me not at all.
Oh, almost forgot exceptions. Writing good exception-safe code is hard -- but mainly because error-handling is hard, and the language requires explicit memory management. Exception handling poses subtle problems in all languages I've used that have them, so I don't hold this against C++.
Note that I'm talking about the standard language, which was only really fully supported by the most popular compilers starting a few years ago. Using advanced language features before that was, umm, interesting. -- DanM
Well, but even taking all that at face value, you can't omit templates, namespaces, and exceptions, you just can't. I do admit that I reached my emotional breaking point a few years ago when things were "interesting", but some issues are unfixable, such as the way that template names expand as nested macros into arbitrarily long and complicated strings (as I believe you've heard me complain about before :-) And of course, the bottom line is that, yes, one can get good work done in C++, but that doesn't mean the language isn't unnecessarily complex -- vastly so. That's what many people like about other languages like Smalltalk and Lisp; the libraries may grow without bounds, but the language itself is small enough to grasp in its entirety.
Did I mention the amusing story some years back, when I was consulting in the C++ compiler group at HP, and the head of the group was heading off to Tokyo for a week, for the annual C++ standards group meeting? I said I was astonished he cared enough to do so, rather than delegating to someone else, considering that he was one of the truly busiest guys I've ever worked with. He explained that he was going in order to plead with them to not add any more features to C++, and that doing so was the best use of his time that he could imagine. :-) -- Doug
God bless him. :-)
I didn't say that one ought to omit templates, namespaces, and exceptions, but only that the complexity that they add is moderate, IMO. The worst stuff is truly in the advanced template techniques. The template name expansion problem is at least partly a quality-of-implementation issue.
You can't omit them because they are tightly integrated with the entire language, now, and because even minimal good practice requires explicitly addressing them. The fact that they split into an easy part and a worst part that is more advanced seems like a tautology, don't you think? The name expansion problem is not quality of implementation, it is inherent and unfixable. If you truly doubt this, I'll dig up a reference.
But, hey, you're preaching to the choir here. As you already know, I consider Common Lisp far superior to C++ in most respects. It makes some of the core concepts more nearly orthogonal (e.g. encapsulation, class definition, polymorphism, namespaces), and omits some (at least logically) unnecessary complications (overloading versus overriding, i.e. run-time versus compile-time and 'punning' polymorphism, and explicit memory management).
Sure, but one has to address the context at hand.
I'm not very familiar with Smalltalk, but I'm skeptical about it because it is based strictly on the OO model that is now considered 'traditional' (i.e. the methods-belong-to-objects model, in contrast to the multimethod model). For advanced programming, I'd choose Lisp over C++ in a heartbeat if I had my druthers. -- DanM
You may recall that I, too, believe in multimethods, and in fact in multi-paradigmicity. So Smalltalk is not my absolute favorite language, but is nonetheless worthy of high honors for being a simple yet powerful language. The actual standard language is even simpler than the ancient Lisp 1.5 core language. Simplicity is a virtue. Not the only virtue, but certainly a virtue. Lisp and Scheme have veered in peculiar directions (for reasons that are understandable but still unfortunate), so I admire them somewhat more in the abstract than in the reality, but that's because I'm a language designer, and can't be absolutely happy with anything ;-) -- Doug
Doug, is this you?
Yes. :-) -- Doug Merritt
We're not worthy! We're not worthy! We're not worthy! :)
LOL! :-)
Doug, you're inserting non-ASCII characters, such as "can�t" when you edit. See How To Pervert Direct Manipulation for an example. Damn MS Smart Quotes :)
Although I did switch to Fire Fox recently, and am not 100% familiar with it yet, I believe you're mistaken, because I saw that garbage there on that page before I touched it...also your quoted "can�t" appears to me as garbage chars (double dotted I, inverted question mark, 1/2 fraction symbol), not smart chars. -- Doug
Yeah, it's my fault. -- RK
Doug, here's a random thought I had recently;
about 10% of the population are political extremists, whether on the right or left. About 10% of the population are concept users, able to see similarities and draw inferences between widely different fields. In both cases, you've got people with a bent towards reductionism. -- rk
Interesting. And so, what underlying phenomenon have you reduced these two things into? ;-) -- Doug Merritt
:) I think the rules- vs concept- using is the fundamental.
But actually, I'm now convinced that I must have been wrong. It has to do with the difference between law and justice.
Laws are rules. Ethics and morality are heuristics. Justice and human rights are concepts.
And the defining characteristic of left- vs right-wing is in their allegiance to justice over laws, outcome vs process. Unless we accept a different definition of right-wingism such as selfish evilmindedness. -- RK
BTW, do you recall a discussion we had about physics where instead of going in depth into one field and learning all the rules, you could learn in breadth and learn all of the concepts involved? Though this predated my making the distinction between rules and concepts, and also my reading of Against Method, where Feyerabend writes "superficial thinker" in an admiring tone. You used a big word to describe the latter form, something like taxonomic ... do you recall what it was? Not exactly a burning issue but it would be interesting to reread it. -- RK
I vaguely remember the discussion, so I just checked some pages where I recalled talking to you about physics back in spring/summer 2004, but alas, did not see the discussion in question. Your comment at the bottom of Against Method about value judgements is sensible, at least in the right context -- but none of this is jogging my memory of precisely which concept you're talking about that I used a sesquipedalian :-) word for.
Lemme know if you think of something else to jog my memory. -- Doug
I think it was actually "taxonomic" in (Never Make Knowledge Prerequisite To Understanding). :)
Aha, that page, right, couldn't remember where that discussion took place, and had forgotten parts of it. I have continued on and off pondering of the final topic there, but still don't have a coherent new thought to relay. -- Doug
Years ago, I used to think that understanding == possessing concepts. Or better yet, Real Understanding(tm) == possessing concepts for the subject matter. And so I could say with a straight face that expert physicists in the same league as Hawking hadn't achieved even the small understanding of their own subject matter that I did.
Now I'm much more inclined to think that possessing concepts is just my personal yardstick for understanding, because I'm a concept user, and wouldn't apply to a rules-user. But that raises the question of whether organizing rules around concepts, and teaching the concepts, has value to rules-users. Because I'm inclined to think that it does but for obvious reasons I have to conclude that I really don't know.
I also don't know if the word understanding has any meaning to rules-users. If I'm right in my theory that rules use corresponds to the mindset that allows most bright children to benefit from enrichment programs -- that is, that knowledge is interchangeable regardless of degree of abstraction -- then the word understanding has, grosso modo, no meaning to rules-users.
And on a different subject, are you primarily a rules user or a concept user or somewhere in between? Assuming there is an in between, which is something else I don't know. -- rk
Another thought that's been running endless circles in my head has been knowledge mapping. It's getting to be a really big problem to establish everything that's known in a field, what studies support or contradict any given fact, what facts contradict other facts, and so on. Apparently it's gotten to be such a big problem that you can discover groundbreaking new facts in a field just by appropriately mapping it. I really want a physics wiki at some point just to map physics. -- RK
Quite so. In fact I have a specific example. Circa 1995 I stumbled upon a new PhD thesis, "Taxonomies and Toolkits of Regular Language Algorithms", Bruce W. Watson (Eindhoven U. in the Netherlands, but in English). He did exactly what you just said: he created an extremely careful and detailed taxonomy in the area for the first time, implementing every variant in the resulting taxonomy in a toolkit, and by doing so uncovered some blank areas in the otherwise regular tree, which allowed him to read off the attributes of the un-invented algorithms simply by their placement in the taxonomy, making it easy to invent and implement them.
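The gap-finding trick Watson used can be sketched in miniature: enumerate every cell of a small attribute taxonomy and see which cells no known algorithm fills. The attribute axes and the "known" set below are purely invented for illustration; they are not Watson's actual taxonomy.

```python
from itertools import product

# Invented attribute axes for a hypothetical family of algorithms --
# illustrative names only, not drawn from Watson's thesis.
axes = {
    "direction":   ["left-to-right", "right-to-left"],
    "memoization": ["none", "full"],
    "determinism": ["deterministic", "nondeterministic"],
}

# Suppose the literature covers only these three variants:
known = {
    ("left-to-right", "none", "deterministic"),
    ("left-to-right", "full", "deterministic"),
    ("right-to-left", "none", "nondeterministic"),
}

# Every unfilled cell of the taxonomy is a candidate "un-invented"
# algorithm whose attributes can be read straight off its coordinates.
gaps = [cell for cell in product(*axes.values()) if cell not in known]
for cell in gaps:
    print(cell)
```

With 2 x 2 x 2 = 8 cells and 3 known variants, 5 cells remain blank, and each blank cell's coordinates describe an algorithm nobody has written down yet.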
(This doubtless is some kind of ammo on the "discovered" side of the ancient "invented vs discovered" debate, but never mind.)
Although it's no doubt arguable whether this was groundbreaking, I nonetheless brought it to the attention of a friend who was organizing a conference in a related area of computational linguistics algorithms. He found the work interesting enough to invite the author, and the newly-minted PhD gave an invited paper at the smallish conference, included in the Cambridge U. Press proceedings publication, so it wasn't considered trivial work by any means. -- Doug Merritt
You're assuming that taxonomies are discovered. There are lots of people who would disagree. :)
RK, it may sound like I'm assuming that, but actually I was being careful (which I hope might be clear upon a re-read) not to assume either side of discovery versus invention, since naturally I'm aware that "lots of people would disagree" with either position, and it's not an argument I want to take a side on currently. :-) DM for RK
Apropos of nothing, I've been thinking about language drift. Take the proverb "He who lives by the sword dies by the sword", which is nowadays used to allude to karma and cosmic retribution. But for all we know, it may have been meant in an extremely literal way, "soldiers die in war", and not as any sort of deep statement at all. -- rk
In general I agree, but I have reason to think that that particular phrase was in fact originally coined with the intention of its modern sense of irony, unlike other coinages where meanings have changed, such as "the exception proves the rule", which of course these days is typically interpreted oppositely to its original meaning, since "prove", there, originally meant "test". Similarly with "moot point". Yet not all ancient expressions have changed meaning. -- Doug for RK
Doug, I have had no internet access at home for many months now, and apologize for not checking for your reply (and I miss a lot of recent changes even when I am here). I did check email before I wrote that query, and there was no response then. Sorry; I'll go read your reply in a cafe in the next few hours, and let's go from there. -- David Liu Delete When Read, but rehighlighted material from DM to RK
David, please do not ever "rehighlight" or otherwise change other people's signatures. You've been doing that in various ways for ages, and I really, really, really dislike you doing so. If I can't get you to stop doing that on other c2 pages (and I know other people have also asked you to stop doing so), I CAN damn well insist that you not edit my HOME page in that strange way.
You may ADD comments to my home page (or delete your own comments, of course, if you feel the need, although I hope you do not).
You do not have my permission to edit anything else at all on my home page. Additions only. I trust that is clear enough for you. -- Doug Merritt
P.S. I note that some Wiki Gnome, by a coincidence of timing, edited multiple paragraphs here around the same time, and I have no problem with that; I'm not talking about standard Wiki Gnoming. I'm talking about highly non-standard non-Wiki Gnome edits to my (and others') paragraphs, such as adding/changing/generally fucking with attributions. -- Doug
Doug, I noticed that in Nov05 you were involved in a difference with Jonathan, and I hope neither of you lets the differences over VI viewpoints affect your future relationship here. I have continued to update Third Generation related information and I value your contributions there as well.
Boulder Patterns Group was last edited by you. I am curious as to why the Walled Garden could not have a few doors made to link to the rest of C2? I was not here, and it looks as though we have driven away good people with interests in patterns.
You are in no position to talk about driving people away, given the wide criticisms you have received. However, for the sake of the argument, let's say that I personally drove away 100% of the people who were interested in the content of the page Boulder Patterns Group. If so, then that would be a good thing -- because it WAS in fact a walled garden. Did they tell us what cool Pattern stuff came up in their meetings? No, they did not. That might have been a way to make that page on-topic rather than off-topic. As things stand, however, it was just about meeting times for some random group meeting that, due to geography, could only possibly interest a tiny number of people, and could not possibly be of interest to the wiki in general. Off topic is off topic.
Doug, I apologize for leading you to interpret my comments as criticism. I did not know what happened there, so I did not know you might have sensitivities. I am not here to judge people, and apologize again if it is seen that way. And I am happy with the explanation you have given.
It is very telling that you have so much difficulty figuring out why something is a Walled Garden, or why Walled Gardens are bad. It would be a very good thing if you worked at figuring out that whole issue, of what is on topic here and what is not, and why -- and I don't mean "invent your own answer", I mean, work at figuring out the answer that the community came up with.
You have more experience than I do. Can I ask whether you consider Walled Gardens worse than Off Topic? If they are not worse, then my views are expressed in More Light Than Heat Guideline (if the people are more On Topic than Off Topic, it is OK). Feel free to add to that page so I can learn from the criticisms.
Lastly, when I was at The Adjunct I tried to converse with you there at Noncommunicative Page, re: the example problem page. If you promise to keep your cool, we can continue here, since I do not have time for The Adjunct at the moment. -- dl
You said: You violated. No gray here.
Doug, I am now thoroughly confused as to what exactly it is that I have done that has no gray about it. I have now clarified a possible misunderstanding on Windows Partition Page Discussion, but all my recent comments were designed to say I am backing off. I acknowledge that my page Thought Police was wrong, for which I have apologized there. I have agreed to its deletion. I don't think that was the point you were making.
As I see it I argued a case against a snap judgement about the page. I was not the person who either deleted or restored the page. What I have experienced is a lot of judgementalism about what is the wiki role, focussed on me because I argued for more tolerance of what was perceived as not relevant. This is not sarcastic, but a request for clarification. -- jpf
I have put this here as it is a comment to you. Feel free to reply on my home page.
Doug, thanks for the reply. One part of it (about signatures) was not me, please note. -- jpf
Doug, on my own home page I put some hours ago, the following - "Thought Police is now revised and has a separate life." You are not acting for me in deleting the current one. Please discuss this with Gunnar Zarncke. -- jpf
You said delete. I deleted. I overlooked that you changed your mind. "You are not acting for me " blah blah... this is basically annoying; you're the one being wishy washy. Instead of implicitly criticizing me for not noticing your rapid change of mind, you should be saying "sorry, my fault". -- Doug
Doug: As a comparatively uninvolved bystander I think you and John have repeatedly been talking past each other. I may have missed things - I'm sure I have - but I haven't actually seen him change his mind, apart from agreeing with you that Thought Police in its original form should be deleted. I have seen others write things that you have appeared to assume came from John, and I think that has muddied the waters, especially since he tried to "fix" things, so they all got more confused. As with all these things, trying to dissect and analyse them now is pointless as they are gone, there is no audit trail, and I'm not sure anyone would learn anything from them. With that in mind, my impression is that you are both misunderstanding each other, and the whole thing should be ignored. Assume Good Faith, move on, and see what his next contributions bring. That's how it looks to me, and I freely admit I could also be wrong. I'll take my own advice, bow out, and not try to pick apart what's happened.
Oh, I agree. All that confusion is of course all the more reason not to criticize; things are too muddled. That's why I was annoyed, I was just trying to move constructively in what I thought was an agreed-upon direction. -- doug
Thanks for that. I've had explained some of the difficulties wiki has encountered while I have been away, i.e. doing other things. Thank you for the work being done to protect what we all have. -- jpf
Thanks for the kind words, and sorry I was testy with you. -- Doug
Thanks for that. What do you think of the Clifford Algebra pages? Is it O.K. with you if I refactor the relevant bits of your comments on Clifford Algebra from my home page into the new page called Clifford Algebra Discussion? -- jpf
I like what you're doing; please feel free. -- Doug
I have restarted the Clifford Algebra process after a break, as you have discovered. I do not have an answer on Wick Rotation, which as I said before somewhere is a new term for me. I rather like the balanced algebras, which are much discussed in some references, e.g. Doran (whose Ph.D thesis is or was available for download from the Cambridge site). In this case it provides a possible outer envelope. On quantum mechanics you are out of my field. Hestenes has something to say in the lecture. -- jpf
You don't have an answer...for what?
Integration and interpretation in terms of Geometric Algebra, beyond the general. There is something in some of the references on the transformation between the different four dimensional forms.
My point about QM is a simple one, for someone with your background, out of your field or not: unlike classical statistical physics, in which probability is a primary notion and is the value of a random variable, in QM probability is a derived notion: it is the squared magnitude of a "probability amplitude", or equivalently, a "probability amplitude" is a square root of a probability, and since that square root need not be real or positive, probability amplitudes are generally complex.
O.K. I had not appreciated that.
This is roughly a complexification of probability along with the observation that squaring the complexified thing turns out to yield non-complexified probabilities. I don't mean to make any larger of a point than that, so further memory of QM isn't needed beyond that definition. -- Doug
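A tiny numerical sketch of that definition may help (the amplitudes here are invented purely for illustration, not taken from any physical system): probabilities obtained as squared magnitudes are always non-negative, yet the complex amplitudes underneath them can cancel each other, which is exactly what classical probabilities cannot do.

```python
import cmath

# Two made-up probability amplitudes for the same outcome (purely
# illustrative, e.g. two paths an event could take). Each has
# magnitude 0.6, so each alone corresponds to probability 0.36.
a1 = 0.6 + 0.0j
a2 = 0.6 * cmath.exp(1j * cmath.pi)  # same magnitude, opposite phase

# Probability is the squared magnitude of an amplitude, so it is
# never negative -- but the amplitudes themselves can cancel.
p_separate = abs(a1) ** 2 + abs(a2) ** 2  # naive classical-style sum
p_interfering = abs(a1 + a2) ** 2         # sum amplitudes first, then square

print(p_separate)     # roughly 0.72
print(p_interfering)  # roughly 0: complete destructive interference
```

The second number is the quantum-style answer: summing the complexified quantities before squaring lets them interfere, while the squaring step guarantees the final probability is still non-negative.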
Section VII in Hestenes Oersted Medal Lecture is I think setting out a different interpretation (starting on p.26) which does not involve probabilities. That was the thing which I wanted to bring to your attention, rather than get into an argument about it. I have not seen any comment on that from any other references. Geometric Algebra For Physicists (2003) does not refer to Hestenes Oersted Medal Lecture (2002). Probably it was not available when they went to press. Incidentally, on p.28 in Hestenes Oersted Medal Lecture there is something which is also elsewhere, that the Pauli matrices have the same relationship as the three basis vectors in 3D Clifford Algebra. It seems as though there is not clarity yet in the physical interpretation area. -- jpf
I'm not arguing, either; my notes about probability amplitudes were purely on the subject of complexification, not about Clifford Algebra per se. Complexification merely means replacing a real variable with a complex variable and then exploring the resulting system; as such it is related to a large number of topics, including Clifford Algebra, but is in another sense also independent of those other topics. So I was just throwing out a few examples. -- Doug
All through Hestenes Oersted Medal Lecture, he is arguing that any complex number needs to have a physical meaning found for the i. -- jpf
Sounds good to me -- although in some cases no one yet knows what the physical meaning is. -- Doug
Delete When Read - Thanks for your contributions to the Clifford Algebra discussions. I plan to resume the work after Christmas. -- jpf
Thanks...I look forward to it. -- Doug
Thanks also for your kind comment on Clifford Algebra Inverse Discussion. I agree Clifford Algebra is not a division algebra. -- jpf
Thanks Doug, and Happy New Year. -- jpf
Symbolic AI has reached the toddler stage, much better than dog levels.
Highly debatable. In the absence of any reference, I assume you're referring to Breazeal's "Kismet" (www.ai.mit.edu), which is arguably a clever application of pattern recognition but not something that even comes close to achieving the breadth of a dog's intelligence, let alone that of a human toddler.
No, I'm referring to the SYMBOLIC AI Cyc. Why everyone in a forum of programmers who know Lisp seems to assume AI means neural networks even after the word symbolic is thrown in is unimaginable.
What neural networks? Kismet was a logical assumption, because its design was influenced by -- and intentionally resembles -- human infant behaviour. As for Cyc, it's a knowledge base coupled with an inference engine. It has no more reached "dog levels" of anything, let alone "toddler stage" -- for any reasonable definition of these -- than any other expert system. "AI approaching at least Dog Levels" is still a long, long way away.
Great, now we're going to argue over the merits of cognition over pattern matching. This is exceedingly lame.
This goes way over the line. NOTHING humans have produced approaches dog levels of cognition. Cite just one, if you insist on bucking that well-known fact. Counter-example: dog vision. (And I'm having trouble thinking of anything Cyc attempts that dogs do at all, so that particular claim is especially baffling.) -- Doug Merritt
Vision is certainly not what most people understand by the word 'cognition' which is generally the higher level functions. What the hell are you even talking about?? What's this obsession with neural networks??? As for dogs, dogs don't think and they don't reason, parrots are more intelligent than dogs are. What kind of higher functioning can a dog possibly do?
As for your "bafflement" that AI research pursues higher-level functioning independently of lower-level functioning, well considering that this is the traditional path and that neural networks are a relatively new thing in AI research, I really wonder what the hell you're smoking! You really need a big slap across the face to wake you up from your dream fantasy when you say that it's "over the line" for someone to point out the traditional divisions in AI research. And it certainly is a fantasy since you're contradicting yourself in the same paragraph. You demand that I cite an AI that does things at least as well as dogs do and then you dismiss Cyc for doing things far beyond the capabilities of dogs. What the hell kind of ideological knee-jerk reactionary crazy ("bucking that well-known fact") thinking is this? This is really disappointing Doug. -- RK
Hang on...I disappointed you, which irritated me, so I wrote the following in an irritated mood and therefore irritated tone, which doubtless will irritate you right back...hang on while I write something about the background of my irritation, before you respond. Give me a bit....Ok, done.
No -- the problem is that you are not using standard technical terminology, and have not studied cognitive science as a technical field, so we're talking about different issues. "Vision" is cognition to cognitive scientists; I couldn't care less what the man in the street thinks.
Yes, CYC can do things that dogs cannot, but look at what I said: "...trouble thinking of anything Cyc attempts that dogs do at all". The things that Cyc does better than dogs, dogs don't do at all. I'm still trying to think of something that dogs do (naturally, in the wild) that Cyc attempts at all.
I did not use the term "neural networks"; I only just entered this conversation (unless I wrote something months ago and then forgot).
You can't logically be disappointed when I speak on the subject as I did here; it's a technical speciality of mine (including animal cognition), and I am expressing facts, not opinions, so if I contradict you on facts, it's time for you to do a Book Stop. If I offer an opinion, it'll be clear that it's merely an opinion. For instance, I think that strong AI is possible. That's my opinion, not a fact, so I could be wrong about that (although naturally I don't think I am). The stuff I said above, by contrast, is simple fact. -- Doug Merritt
Actually, the possibility of strong AI is a fact deriving from scientific materialism. Anyone who denies strong-AI (like Roger Penrose) is acting in an extremely anti-scientific manner. You can't deny one of the fundamental assumptions of science without evidence without losing all credibility as a scientist. Nix that, you shouldn't be able to, but for some screwed up reason Penrose did get away with it. Partly this is because of politics and the other part is for the same reason that "The Axiom of Choice is obviously true; the Well Ordering Principle is obviously false; and who can tell about Zorn's Lemma?".
Background for my irritation: I know you're a smart guy (here in Silicon Valley, bright people are a dime a dozen, so when we say "smart" here, that often means "creative genius" to the rest of the world, so that's not damning you with faint praise), you're widely read (in some areas, like child care, apparently much better read than I am, and I was meaning to ask you for references, and in other areas, clearly less well read, but my point is not to criticize you, just to tell you it's obvious you're widely read), and that you have spent a very large amount of thought on a wide range of issues. That's all stuff that I appreciate about you.
I am completely willing (as a matter of personal interest and philosophy) to be very open-minded to:
Ideas you have for improving things in some/any subject matter
Critiques you have for the status quo in some/any subject matter, even if you haven't come up with a better plan as yet (and recognizing that you might in the future).
...after all, I myself have critiques of at least some aspect of most traditional approaches to most topics, so I'm willing to believe that you might share some of my own critiques, and might have thought of some that I have not.
Then, due to old conversations between you and me, I am slowly (I have no reason to hurry) considering, open-mindedly, your suggestion that conceivably all topics can be approached both qualitatively and quantitatively, even the subjects that are nominally quantitative, by analogy to old-school descriptive (taxonomic) biology versus the various more recent forms of biology that go beyond the purely descriptive. You raised a reasonable doubt in my mind on that topic, and it touches on...just about everything, so I continue to think about strengths and weaknesses of that argument across the board.
Ok, then we get into areas where it is harder to be open-minded. A common one is that you usurp technical terms. For example, I can be open-minded about expanding the term "operating system" to be something larger and more all-encompassing than the usual technical definition, so I'm not completely closed-minded to you redefining that term. BUT that introduces communication problems:
You unfortunately tend to throw a wrench into the works, by not just expanding the definition of a term such as "operating system", but by making your definition mutually exclusive with the definition accepted by millions for decades, as you did recently when you insisted that "scheduling algorithms have nothing to do with operating systems".
Now, I can be open-minded about you expanding the definition of terms like "operating system", but when you say things like that, that make your definition of the term mutually exclusive with the understanding of millions of other technical people, that only hurts communication.
Millions of highly technical people understand the term "operating system" to include issues such as "what algorithm(s) to use to schedule tasks/processes", and that definition/understanding is not arbitrary: any system that has pre-emptive tasks/processes must have a scheduler that uses some kind of scheduling algorithm.
That is a matter of fact, not opinion, and I'm very sorry, but you lost credibility when you said that, because it implied that you did not understand standard operating system theory. It's fact, not opinion, because as a simple matter of logic, multiple threads/processes must be scheduled, somehow, whether you have 1 processor or N processors. It's a logical inevitability.
You may mean something different than that...but if so, that's the problem with usurping terminology. You lose credibility with 100% (not 50%, not 90%, not 99%...100%) of your technical audience when you say things like that. That's the danger of usurping terminology. We know that scheduling algorithms are an essential part of the traditional notion of "operating systems", and you can't just say that such things are beneath your concern. They have to be addressed, by scheduling algorithms, somewhere in the design of a system, never mind where. It's a logical necessity, and to imply otherwise merely comes off very badly.
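That logical necessity is easy to make concrete. Below is a toy pre-emptive round-robin scheduler; the task names, durations, and quantum are invented for illustration, and no real operating system discussed here is anywhere near this simple -- the point is only that *some* algorithm must decide who runs next.

```python
from collections import deque

def round_robin(tasks, quantum):
    """Toy pre-emptive round-robin scheduler. 'tasks' is a list of
    (name, remaining_time) pairs; 'quantum' is the time slice.
    Returns the sequence of (name, time_run) slices, in order."""
    queue = deque(tasks)
    trace = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)   # run until done or pre-empted
        trace.append((name, ran))
        remaining -= ran
        if remaining > 0:
            queue.append((name, remaining))  # pre-empted: back of the line
    return trace

# Two invented tasks needing 3 and 5 time units, with a quantum of 2:
print(round_robin([("A", 3), ("B", 5)], 2))
```

Swap the queue discipline (priority heap, lottery, earliest-deadline-first) and you get a different scheduling algorithm, but the scheduler itself cannot be omitted: something must make this decision whenever there are more runnable tasks than processors.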
There's also the issue that, if someone like, oh, say, Doug Merritt has demonstrable expertise in designing and implementing "operating systems" under the traditional definition of the term, then if someone who has not done so comes along and says "I'm redefining the term "operating system", and as a result, you have no expertise in that subject", then guess what, people like Doug Merritt are going to get really, really pissed that you claim that we have no expertise in a topic where we demonstrably have years of theory, practice, and pay.
How am I supposed to be open-minded to your definitions, when your definitions deny that I know anything at all? You are in no position to deny my expertise by traditional definitions, and there's no reason for me to be open-minded about new ways to redefine terms that might deny that I have any basis for making a living! Don't you understand that you make it personal when you redefine standard terminology in backward-incompatible ways???
Stick to expanding definitions of terms, and I will continue to be open-minded. But if you insist on completely redefining specialties, then, let's see, you will then deny that I know anything about 100% of the subjects that I absolutely busted ass, for many, many years in some cases, to learn and to apply in the field, and for which I have been paid, as employee and as contractor, for startups and for Fortune 500:
operating systems
compilers
Computer Human Interface and graphics
CPU and overall hardware design
AI (I shipped PRODUCT, damn it), backed by cognitive science theory
(examples, not exhaustive)
...as well as "hobbies" of mine that I have not been paid for, but that I paid to learn with blood, toil, sweat, and tears, such as:
theoretical linguistics
fundamental physics (e.g. quantum, special and general relativity)
a wide array of pure and applied mathematics,
(examples, not exhaustive)
...lots more, I'm a generalist, but I read primary research papers.
You can't do anything more than piss me off if you insist that I know nothing about fields that I studied, with great effort, for years, and I can't do anything more than discount your opinions when you claim I know nothing about areas where I acquired expertise at great cost, when you obviously have not.
So damn it, I appreciate your creative genius, but stop pissing me and others off by denying that we know anything, when we damn well know that we do know something. We paid for it in blood sweat and tears. Haven't you ever experienced that same toll yourself???? -- Doug Merritt
I'm putting this here because 99% of it by now doesn't belong on Technology Disappointments, and because there's way too much formatting.
Okay, correct me if I'm wrong but,
artificial intelligence includes both pattern recognition and logical inference
traditionally, AI research has been focused on logical inference
to claim that AI is "tardy" in delivering pattern recognition that approaches that of a dog is to imply,
dogs perform meaningful logical inference (otherwise a better animal example would have been a rat) and/or
the goal of AI research has always been to do pattern recognition
both of which are wrong
so it's reasonable to point out that AI research is not so unaccomplished as the example implies
Please explain exactly where I lost the thread.
Now turning to OSes,
any system that has pre-emptive tasks/processes must have a scheduler that uses some kind of scheduling algorithm.
Yup, every OS has a scheduler. So what? Every AI is made of matter in this universe. Why? Because you can't do computation without matter. That doesn't make matter (let alone atoms or ions in particular) an intrinsic part of an artificial intelligence. The same thing with life-forms. Do you see the word 'matter' anywhere in the Definition Of Life? It isn't there because matter has nothing to do with what life is, even though every form of life in our universe must exist over some kind of matter-based substrate.
An OS is going to have a scheduler, the same way it's going to run over a CPU. That doesn't mean that the particular characteristics of that scheduler have anything to do with the characteristics of the OS. What kind of scheduler to use is partly a societal and partly an engineering decision, it certainly has nothing to do with design. When Costin accused me of never having created a scheduling algorithm, I was perfectly within my rights to cut the entire subject out of the discussion. The particular characteristics of the OS's main schedulers have just about nothing to do with macro-characteristics of the system, and the characteristics of the user-level schedulers are much more diverse and open for experimentation. Further, since it's MY role to cater to as wide a spectrum of users as feasible, I don't see any particular reason to cast economic policy in stone. If some fascist wants to create a fascist scheduler, I haven't any problem with that. Economic policy is not my responsibility.
it implied that you did not understand standard operating system theory
I understand perfectly well what the technical view of an OS is, I just don't share it. The technical view is aimed for an engineering mindset. With an engineering mindset, it matters what kind of scheduling algorithm you have because it's the engineer's job to create it. Well, it's just not my job.
You may mean something different than that...but if so, that's the problem with usurping terminology.
If I recall correctly, I made sure to define precisely what an OS was. An OS is a complete programmatic base. That means, whatever some programmer is going to see and use, that's the OS. And the innards of the OS are not the OS because the programmer doesn't see them. To prove that a particular scheduling algorithm is part of the OS, you need to demonstrate that the particular behaviour it engenders is not only perceptible but significant to programmers who use the OS. Something isn't part of an OS by the mere fact of being contained within it, any more than the undigested food I just ate is part of my body (or viruses, or bacteria).
Not everything that "comes on the CD" is part of the OS. Games are not part of the OS. Like I already explained, there is a core that's the OS and beyond that core there is a halo and beyond the halo there's orbiting systems which create their own micro-environments. The really tricky thing is that just like there's a fuzzy upper boundary to the OS, so there is a fuzzy lower boundary to it. So for example, Unix is an OS, but Mach may or may not be, and Mach is never a part of Unix even in a Unix over Mach system. The same way that Unix is never a part of the Smalltalk OS.
Look, the concept of OS is very fuzzy, extremely fuzzy, but this fuzzy concept is at least useful. The other definitions of OS I've seen are not at all useful. You claim that a scheduler is an intrinsic part of an OS? Well, prove it. It would be entirely possible to write a Smalltalk that turned over all scheduling to the native OS. In that case, technically, the Smalltalk OS wouldn't even have a scheduler, it would just be using someone else's scheduler. There are real consequences to not having a scheduler as part of the OS, bad ones, but it's entirely feasible. And note that what determines whether a scheduler is or is not part of the OS has nothing to do with what kind of algorithm it uses, but entirely secondary stuff like whether it's accessible in the same way as everything else in the OS (whether you can reflect on it if your OS is reflexive, whether the scheduler is OO if your OS is OO, and so on). It's this secondary stuff that's of import to an OS designer, the particular algorithm is up to the technician / writer / engineer, to whom I'll say 'make the best decision you know how because I wash my hands of it'.
You want to fight about which definition is right? Fine by me. But I'd prefer you actually do it, actually fight for the definition rather than just complain that I'm not respecting your definition. The key criterion that differentiates different OSes is this; ease of porting. If you only need to recompile your code in order to run it on a different platform, then this different platform is the same OS. If you need to actually port it, then it starts to be a different OS. If you have to work hard at porting it, because the concepts of the OS are entirely different, then it's a different OS family. It's the same as species in biology; conceptual descent and ease of DNA sharing. So the different Smalltalk dialects are much the same language, but as operating systems they are substantially different though they all fall within the same family.
How am I supposed to be open-minded to your definitions, when your definitions deny that I know anything at all?
That's not what it's all about.
First of all, even if you hew to the traditional definition of OS, it doesn't mean you don't know anything about the subject defined by my definition. It just means that not everything you know and not everything you did falls in the category. There's a whole bunch of overlap between the two, and I don't doubt that you fall in the category of OS writer; it's just that much of the stuff you did as an OS writer had nothing to do with design per se, only some of it was design. I'm not denying your expertise, I'm just not acknowledging the relevance of all of your expertise.
Second, definitions are a technical matter and this particular definition is entirely technical. Given that Smalltalk is an OS, and this is widely recognized, and given that the Smalltalk OS development model is superior to the traditional OS development model, it follows that certain concerns of traditional OS developers are revealed as entirely parochial. If you can reuse a native scheduler and not write one of your own, then the scheduler simply isn't part of the OS.
Do I lose credibility by saying this? What does it matter? I'm sure I lose credibility by saying that Roger Penrose is not a scientist, but this is a matter of fact not opinion. -- RK
Delete When Read Doug, do you know anything about the work of Burkhard Heim? See the item at the end of my home page. Comment welcome. -- John Fletcher
Wow! Thanks for running my attempts at Fortrash and Pascal. I'm glad they worked. -- Elizabeth Wiethoff
I'm impressed that you have Fortran, Pascal, and Basic on hand. Eeks! What a geek! -- Eliz
Yes, I am, but not based on that evidence. Fortran and Pascal are just part of the usual development environment on Linux.
"What about Basic?" I'm a compiler guy. I have, looks like, 3 implementations of Basic, 5 to 7 of Lisp (not counting several I did myself), 3 of Scheme, one of Haskell, etc, etc, etc...The languages area of my disk currently is soaking up...lessee...2.3 gigabytes.
Snobol, Icon, Forth, Prolog, many others, including of course, a tiny Cobol.
So "I'm a geek" follows from "I'm a compiler guy", not from having obsolete compilers for no good reason. I have an excuse. :-) -- Doug
Doug, I've seen Aleksey's stuff. I exercised restraint. It's the simpler course. -- Anon
Thank you, Anon. You are of course correct. But note that I was still being patient right up to the point where he started being needlessly offensive on the broader topic of wiki, etc.; being vulgar and rude is, of course, merely a reflection on my personal failings, but I would hardly describe it as uncalled for, given the circumstances. Nonetheless, thank you for reminding me; I will work harder on my personal failings in such circumstances. British dry wit might have done spectacularly better, as one example. -- Doug
Sorry that I accused you of rudeness without knowing the context. I will pay more attention to the discussion, or keep still, in the future. -- Gunnar Zarncke
Thank you, I appreciate your good will on that. -- Doug Merritt
There are a number of factors that have made backyards very popular in American culture. The first of these is ever-longer working hours, leading to less quality time with children, or hell, any time with children. The second is a car-dominated culture which has made streets massively unsafe for kids; I wouldn't be surprised if cars outweigh all other causes of childhood death combined. And the third is a pernicious atomizing of society into its component individuals, which has destroyed the social fabric: the same fabric which extended the functional range of parents (eg, shared supervision) and renewed their trust in human beings. This doesn't even touch on instances specifically aimed at screwing with children (eg, causing fear of "the pedophile in our midst") or combinations of the above (eg, bedroom communities -> car-based atomization of individuals). With these fucked up factors, it's not surprising that parents cast favourable glances on fenced-in prisons for children. In a fear-based culture such as the USA, being "safe" is more important than being free. -- RK
Definitely good points.
In general I think I often agree with your points when you mention them as important factors, but may disagree if you cast them as absolutes, because I think that absolutes are rare and notoriously difficult to identify reliably.
On the other hand, I sympathize with the desire to find true absolutes, and I can't help but notice that treating things as absolutes almost always simplifies analysis. Nonetheless... -- Doug Merritt
Of course, this is assuming Europeans aren't as obsessed with backyards as Americans. For all I know, they could be just as bourgeois. Damn, the more I think about it, the more I miss the streets, trees and parks of Paris. -- RK
Yes, Paris is nice. -- Doug
Delete When Read: I've touched No Keening again, just for formatting. I like the new version, I think your touches complemented and completed my original version. Thank you.
Doug, I was the one who originally deleted Bluetail Ab. I don't really see any value in the page (especially since the company doesn't exist anymore, and the only content was three broken links). I still don't think it has much value, but I don't see any point in Edit War'ing with (I think) Luke over it. --Tim Lesher
You mean, where I said "I do not have any interest in Edit War'ing this page...would you kindly answer those kinds of questions, and we can make your answer part of the page, and then there's no reason to delete the page, and then everyone's happy."?
Yikes--miscommunication. I was just stating that I wasn't going to try to re-delete it. Sorry for the misunderstanding; Delete When Read. -- Tim Lesher
It took a couple days but I've come up with a couple insights regarding parabarbarians. Enjoy. :) -- RK
Thanks for your note. Seen the conversation. I'm not intending to do any more now, or soon, but I thought the edit I made was worth making regardless.
Doug -- looks like the squeaky wheel gets greased. I brought up your website "observations" once again, and apparently they have greater effect when they come from an outsider. The web designer is being Re Educated. -- Tim Lesher
Hurray! Congratulations! -- Doug
Doug to Donald Noyes:
I think you view my interactions with you largely as an Anti Pattern, but 'tis not so, I am typically aiming at being constructive, whether it seems so or not. I was trying to be constructive in the last interaction I wrote (which you simply deleted). It appears to me that you think I am not trying to be helpful, but from my point of view, at least, that is not the case. I may be critical, but criticism can be intended to be helpful, and that was the case with what I wrote to you most recently. -- Doug
Donald Noyes to Doug:
Not my view at all. Continued interaction of this type is encouraged by me, especially coming from one who seems to be extremely intelligent and well-informed. I read your posts and interactions with others with interest, with the result of becoming more informed in areas in which I am but a neophyte and a novice. You should find in my writings evidence that I welcome helpful interaction, and especially criticism. I agree and disagree with widely divergent personalities with an approach which assumes that malice is not intended. It would be foolish of me not to learn from interactions with the people of differing views whom I encounter here. I read what you wrote before I deleted it, and since it was directed at me and I had absorbed its meaning, I felt it was a Delete When Read segment which, if left, would detract from rather than reinforce the message of the page. I agree with Costin that Critics Are Your Best Friends, so I not only welcome, but also encourage, helpful criticism.
Hi Doug, you marked my page FebruaryZeroSix. May I ask why? It neither seems to be topical, nor are home pages usually listed among topical pages (see Implicit Topics). -- Gunnar Zarncke
It was a joke, motivated by the fact that you're a little over-eager to apply that tag to pages, as with any edit that obscures ongoing conversation by overwriting the last change diff. -- Doug
Hey, Doug... I saw your reference to Lisp In Small Pieces on top's page. I finally decided to check it out, and read the sample (chapter 9) on the author's home page. I'm intrigued by the sample's "Socratic" style - is the whole book written that way? -- Tim Lesher
URL please? I see no sample whatsoever, although I do see a summary (not sample) of chapter 8 -- and it is not Socratic. -- Doug
Hmm. Looks like the link I followed was to a sample chapter of The Little Schemer, not Lisp In Small Pieces. -- tl
Doug, sorry for taking so long to reply to your question about receptor mutations -- my answer's on my home page. --Keith Mann
Thanks for your answer; that's kind of what I was afraid of. -- Doug
I don't know why you're surprised; you know much, much more biology than I do, and functional redundancy is a basic idea. From www.talkorigins.org:
Cytochrome c is an extremely functionally redundant protein, because many dissimilar sequences all form cytochrome c electron transport proteins. Functional redundancy need not be exact in terms of performance; some functional cytochrome c sequences may be slightly better at electron transport than others.
Decades of biochemical evidence have shown that many amino acid mutations, especially of surface residues, have only small effects on protein function and on protein structure (Branden and Tooze 1999, Ch. 3; Harris et al. 1956; Lesk 2001, Chs. 5 and 6, pp. 165-228; Li 1997, p. 2; Matthews 1996). A striking example is that of the c-type cytochromes from various bacteria, which have virtually no sequence similarity. Nevertheless, they all fold into the same three-dimensional structure, and they all perform the same biological role (Moore and Pettigrew 1990, pp. 161-223; Ptitsyn 1998).
Even within species, most amino acid mutations are functionally silent. For example, there are at least 250 different amino acid mutations known in human hemoglobin, carried by more than 3% of the world's population, that have no clinical manifestation in either heterozygotic or homozygotic individuals (Bunn and Forget 1986; Voet and Voet 1995, p. 235). The phenomenon of protein functional redundancy is very general, and is observed in all known proteins and genes.
[Now if you ever figure out how > 750% (250 * > 3%) makes sense ....]
With this in mind, consider again the molecular sequences of cytochrome c. Cytochrome c is absolutely essential for life - organisms that lack it cannot live. It has been shown that the human cytochrome c protein works in yeast (a unicellular organism) that has had its own native cytochrome c gene deleted, even though yeast cytochrome c differs from human cytochrome c over 40% of the protein (Tanaka et. al 1988a; Tanaka et al. 1988b; Wallace and Tanaka 1994). In fact, the cytochrome c genes from tuna (fish), pigeon (bird), horse (mammal), Drosophila fly (insect), and rat (mammal) all function in yeast that lack their own native yeast cytochrome c (Clements et al. 1989; Hickey et al. 1991; Koshy et al. 1992; Scarpulla and Nye 1986). Furthermore, extensive genetic analysis of cytochrome c has demonstrated that the majority of the protein sequence is unnecessary for its function in vivo (Hampsey et al. 1986; Hampsey et al. 1988). Only about a third of the 100 amino acids in cytochrome c are necessary to specify its function. Most of the amino acids in cytochrome c are hypervariable (i.e. they can be replaced by a large number of functionally similar amino acids) (Dickerson and Timkovich 1975). Importantly, Hubert Yockey has done a careful study in which he calculated that there are a minimum of 2.3 x 10^93 possible functional cytochrome c protein sequences, based on these genetic mutational analyses (Hampsey et al. 1986; Hampsey et al. 1988; Yockey 1992, Ch. 6, p. 254).
Brilliant, brilliant example, thanks for quoting that. My interpretation is that precisely because cytochrome c is so essential, there are so very many viable forms of its genetic encoding; and/or, even more likely, it became essential to all living things very early because it was so viably "protean". In the earliest history of life, brittle/fragile sequences that easily became non-viable from a SNP (Single Nucleotide Polymorphism) would have been selected against. I would in fact expect that the very earliest, pre-Kingdom-differentiation proteins (and their genetic encodings) would tend to have this kind of resiliency against transcription error.
Important functionality that evolved billions of years later, however, in some cases is demonstrably susceptible to SNPs; many human diseases have been found to be the result of SNPs.
Back to the original topic, cellular receptors for external (let's say, for now) messenger molecules existed far, far, earlier than the origin of multicellular life, so some of them are presumably robust against transcription errors. Others, however, became specialized very recently, evolutionarily, and those would be less robust.
We know, even from popular headlines, that neuronal cell receptors for moderately old messengers such as serotonin turn out to come in many, many different receptor types, preferentially clustering one way or another on different kinds of neurons (and indeed, non-neurons): primarily in different parts of the brain, but very importantly in the gut as well, and thirdly on other cells, including skin cells, throughout the body. And we know very little about how many kinds of serotonin receptors there are, or why some of each kind cluster on one kind of neuron (or other cell) in one part of the brain (or elsewhere in the body), but less so or not at all elsewhere. That's just one messenger, out of hundreds of identified messengers (of which neurotransmitters are a rough subset, though essentially all neurotransmitters are messengers for non-neurons as well), with certainty that there are at least thousands, and possibly vastly more than that....
The subject is inconceivably complex, and makes our current McCulloch & Pitts artificial neural network model look like Roman numerals compared with modern abstract math (which, btw, McCulloch and Pitts understood, although their current-day neural net fans nearly universally do not).
I'll stop there; too much typing for now.... -- Doug
...
STOP THE PRESSES, I forgot the important point -- it's not just about redundant backup of functionality, not at all! Consider Prozac (and skip the controversial issues, for the moment), and let's pretend that it does nothing more nor less than what it seems to promise: SSRI, Selective Serotonin Reuptake Inhibitor. If Prozac inhibited reuptake of serotonin in every synapse/cell receptor in the body that produced or sensed serotonin, it would certainly kill 100% of people who took it.
The key point is that it is "selective": it only affects certain kinds of serotonin receptors, and the kinds it affects appear only in certain kinds of cells (including but not limited to neurons) in certain parts of the brain and body. For whatever reason, the receptors it affects on those cells happen to be such that the overall effect sometimes helps with depression -- but it also affects receptors on cells in e.g. the gut and related areas, which is why Prozac and similar drugs can cause digestive and sexual issues, too.
Different cells have differing types and numbers of receptors for the same messenger for the purpose of fine-tuning the response of different cell types in different anatomical areas. This kind of fine-tuning has been ongoing throughout the evolution of all species, and some things in some species (e.g. human neurons) are more fine-tuned than others, and/or have a wider variety of fine-tuning experiments ongoing in the gene pool. Thus the idiosyncratic reactions of individuals to Prozac and many other drugs that are intended to be finely targeted; they are still only grossly targeted, given the individual variations in the gene pool. This tends to be less true for the oldest areas of functionality, e.g. the globins, including hemoglobin, which were fine-tuned (e.g. infant globin versus adult globin) a comparatively long time ago, evolutionarily.
So it's very, very, very important to understand that it's not just about redundancy; the genes and proteins and receptors for things that are more newly evolved tend to be far more subtle in fine effect, not purely redundant in a different form, whereas the very oldest genes/proteins/enzymes/receptors tend to be more robust against tiny changes.
-- Doug
I can't offer the URL right now (due to sad system problems), but there is in fact a collaborative database collecting information from research world-wide, as to which receptor variants appear on which kinds of cells in which gross areas of the brain. It's an important start, but we still know comparatively little. -- Doug
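As a footnote to the cytochrome c quote above: Yockey's 2.3 x 10^93 figure sounds enormous, and it is, yet it is still a vanishingly small slice of the naive sequence space for a 100-residue protein. A quick back-of-the-envelope sketch (this is my own illustration, taking the quoted figure at face value and assuming the naive model of 20 possible amino acids per site):

```python
import math

# Naive sequence space for a 100-residue protein: 20 amino acids per site.
total = 20 ** 100

# Yockey's lower-bound estimate of functional cytochrome c sequences.
functional = 2.3e93

# Fraction of the naive sequence space that is functional.
fraction = functional / total

print(f"total space      ~ 10^{math.log10(total):.0f}")     # ~ 10^130
print(f"functional       ~ 10^{math.log10(functional):.0f}")  # ~ 10^93
print(f"functional share ~ 10^{math.log10(fraction):.0f}")    # ~ 10^-37
```

So there are astronomically many functional sequences in absolute terms, yet they are only about one in 10^37 of the naive possibilities; functional redundancy and sequence constraint are both huge at the same time.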
Doug, I very much appreciate your comments about John, which I think refer to me, on Ruby Instead Of Smalltalk. On this occasion there is some confusion, as I had edited the page, but only to provide some extra links at the bottom. I have been reluctant to get into arguments on wiki, and contributed Alternative Not Exclusive only to try to calm things down. In this I did not succeed at first; if you look at that page you can see the outcome. If you did mean me, thanks, as it means a lot. -- John Fletcher
Here's a thornbush of a question for you. I googled my name after EH suggested it and found a couple of cites to "stuff I had written" on Why Wiki Works Not. I can't find a single word on that page that's mine. On the other hand, that page got refactored by moi into How Wiki Works and Wiki As Anarchy. And I'm 90% certain that everything in How Wiki Works up to "Some observations" is actually by me. I consider "the idea is so stupid as to be reprehensible" and the mention of wikipedia's sharply defined cliques which I got burned with to be giveaways. But the way it's written it actually looks like it's by Andy. How would you resolve this? I'm kinda disappointed that the one time someone bothers to cite me in a semi-serious paper, I can't even find what I wrote that they're citing. -- RK
Doug, Thank you for your comments. I am glad you enjoyed the History. I think that someone needs to write the sequel. My copy of Geometrical Vectors has now arrived. Very interesting. -- John
Cool. I've been trying to integrate it with traditional differential forms approaches, but I think I need a better text on the latter than I happen to have on hand; any suggestions? -- Doug
Head on over to Bubble Sort Challenge, if you like. I've posted an amazing Perl Language Bubble Sort there. It uses Regular Expressions instead of arrays. -- Elizabeth Wiethoff
Fun, good for you! -- Doug
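For readers curious about the general trick behind a regex-driven Bubble Sort, here is a hedged sketch of the idea in Python (my own illustration, not Elizabeth's Perl code, and assuming single-space-separated non-negative integers): keep the numbers in a string, and repeatedly swap adjacent out-of-order pairs found by a regex, until a full pass makes no swaps.

```python
import re

# An adjacent pair of numbers in a space-separated string.
PAIR = re.compile(r"(\d+) (\d+)")

def regex_bubble_sort(s):
    """Bubble-sort a string like "3 1 4", using regex matches and string
    splicing instead of an array of numbers."""
    swapped = True
    while swapped:
        swapped = False
        pos = 0
        # One bubble pass: examine adjacent pairs left to right.
        while (m := PAIR.search(s, pos)):
            a, b = int(m.group(1)), int(m.group(2))
            if a > b:
                # Splice the swapped pair back into the string.
                s = s[:m.start()] + m.group(2) + " " + m.group(1) + s[m.end():]
                swapped = True
            # Advance only to the pair's second number, so pairs overlap
            # the way a classic bubble pass does.
            first_after = m.group(2) if a > b else m.group(1)
            pos = m.start() + len(first_after) + 1
    return s
```

For example, `regex_bubble_sort("3 1 4 1 5 9 2 6")` yields `"1 1 2 3 4 5 6 9"`. The Perl version can be even terser, since `s///` combines the search and the splice in one step.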
Doug, Greetings. I have just seen a comment of yours on the Transpose Function, I think from a couple of years back. Are you aware of the work Functional Pattern System For Object Oriented Design? In my usual way, I am getting to understand it by implementing the examples, currently in Eiffel Language, see Object Functional Implementation. As you might guess, more tools for mathematics. -- John
I haven't read it, thanks for pointing it out. -- Doug
Thank You. Um, there's been some interesting activity in the past couple weeks on Capabilities Management, Complex Event Processing, Enterprise Service Bus, Portal Software, Complexity Management, Stu Charlton, Cultural Change, Management Road Map, Leader Ship, Change Your Organization Tactics, Strategic Planning, Change Management, Business Process Management, Does It Matter, The Fifth Discipline, Getting To Yes, Conflicting Requirements, Wiki Mind Wipe Reality Check, Slow Down To Speed Up, When Is Xp Not Appropriate, Prisoners Dilemma, Culture Is The Manifestation Of Leadership. Any advice? -- Eliz (Delete When Cooked. Don't mind me; I'm just cooking a bit: 19-10-2006)
Hi there. Well, the appropriate gnoming could be confusing to, and misunderstood by, bystanders. How about what I just tried with Capabilities Management? Move it to his home page, let time pass, then remove from home page. I had the thought that referring to it as his school notes should help in explaining the motivation without long explanation, and is more or less accurate figuratively whether he is literally in school or not. -- Doug
Thanks for the advice. I diffed each page and moved the green stuff to David Liu. That way I could be certain I was moving post-ban material and not pre-ban material. My concern has been enforcing the darn bans around here, not whether DL's material is actually interesting & useful to others. Let's let this sit a few more days before I consider it cooked. -- Eliz
See original on c2.com