Which Representation?
Quote from Paul Vernon on December 3, 2021, 11:21 am
The manifesto says
Value vs. Appearance
The third and last logical difference ... is the one ... between a value per se and an appearance of that value in some particular context.
... there is exactly one integer 12 “in the universe,” as it were
... there is a logical difference between an appearance of a value, on the one hand, and the internal encoding or physical representation of that appearance
Which, to be quite frank, is incoherent (and not just due to my abridgement above).
Unless I am very much mistaken, Hugh and Chris appear to be claiming a logical difference based solely on physical differences (i.e. differences of encoding or physical representation.)
A logical difference has to be a difference at a logical level. Right?
Nobody denies that there are different ways of physically encoding values in machine representations. But so what?
OK, so to be more charitable, really Chris and Hugh are saying that a single value can have multiple different appearances. They state
Note carefully too that appearance of a value is a model concept
So, to make my own example,
+12
and 12
could both be appearances (in the model, visible to users) of the same value - the exactly one integer 12 “in the universe”.
But where is the justification for that position? Arguing up from physical encoding does not justify the position that "a single value can have multiple different appearances". Well, it maybe justifies "could have", but it does not justify the position that we have to allow it. It does not justify turning a could into a can.
And we don't need to look to circles for more discussion. We could allow
+12
and 12
and 012
and 12.0
and 12.
and 1100b
and ¹²⁄₁
and ⁻¹²⁄₋₁
and ²⁴⁄₂
and 12+0i
(etc) to be appearances of the number 12, but why? I see no justification to allow such lack of simplicity.
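As an aside, many existing languages already allow several of these spellings to denote one value. A quick sketch in Python (chosen only for illustration; the thread names no particular language), transliterating some of the list above into Python's literal syntax:

```python
# Several source-text spellings, one value. Each literal form below is a
# Python transliteration of an "appearance" from the list above.
assert +12 == 12              # optional sign
assert 0b1100 == 12           # binary (the "1100b" spelling, Python-style)
assert 12.0 == 12             # "12.0" compares equal to the integer
assert complex(12, 0) == 12   # "12+0i"
# A leading-zero form like 012 is deliberately a SyntaxError in Python 3:
# the designers chose NOT to allow that particular appearance.
# Could, not must.
```

Which illustrates the point either way: a language *could* admit many appearances, and each designer chooses which to allow.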
Quote from Dave Voorhis on December 3, 2021, 12:49 pm
Quote from Paul Vernon on December 3, 2021, 11:21 am
The manifesto says
Value vs. Appearance
The third and last logical difference ... is the one ... between a value per se and an appearance of that value in some particular context.
... there is exactly one integer 12 “in the universe,” as it were
... there is a logical difference between an appearance of a value, on the one hand, and the internal encoding or physical representation of that appearance
Which, to be quite frank, is incoherent (and not just due to my abridgement above).
Unless I am very much mistaken, Hugh and Chris appear to be claiming a logical difference based solely on physical differences (i.e. differences of encoding or physical representation.)
A logical difference has to be a difference at a logical level. Right?
Nobody denies that there are different ways of physically encoding values in machine representations. But so what?
OK, so to be more charitable, really Chris and Hugh are saying that a single value can have multiple different appearances. They state
Note carefully too that appearance of a value is a model concept
So, to make my own example,
+12
and 12
could both be appearances (in the model, visible to users) of the same value - the exactly one integer 12 “in the universe”.
But where is the justification for that position? Arguing up from physical encoding does not justify the position that "a single value can have multiple different appearances". Well, it maybe justifies "could have", but it does not justify the position that we have to allow it. It does not justify turning a could into a can.
And we don't need to look to circles for more discussion. We could allow
+12
and 12
and 012
and 12.0
and 12.
and 1100b
and ¹²⁄₁
and ⁻¹²⁄₋₁
and ²⁴⁄₂
and 12+0i
(etc) to be appearances of the number 12, but why? I see no justification to allow such lack of simplicity.
I don't think they meant appearance as in how it looks, but as an occurrence of. I.e., there might be multiple occurrences of a given value. E.g., in the expression 2 + 2, the value 2 appears twice.
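That "occurrence" reading can be made concrete with Python's ast module (a sketch, not a claim about what the manifesto's authors had in mind): the one value 2 occurs at two places in the parse tree of "2 + 2".

```python
import ast

# Parse "2 + 2" and count the occurrences (appearances) of the value 2.
tree = ast.parse("2 + 2", mode="eval")
twos = [node for node in ast.walk(tree)
        if isinstance(node, ast.Constant) and node.value == 2]
assert len(twos) == 2   # one value, two occurrences in the expression
```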
But it's been a while since I read that part, so I happily admit I may have misinterpreted now and not then (or now and then), and I generally don't pay much attention to philosophical considerations. What matters -- at least in terms of ultimately producing computational implementations -- are language (whether implementation or model) syntax, semantics, and underlying mathematics. The philosophy that underpins these may be superficially interesting, but rarely results in anything except verbiage and misinterpretation.
Ultimately, it's only implemented computer language semantics that matter.
So, short answer: I wouldn't worry about it. What matters is...
What does your language do?
Quote from Paul Vernon on December 3, 2021, 2:02 pm
Quote from Dave Voorhis on December 3, 2021, 12:49 pm
Ultimately, it's only implemented computer language semantics that matter.
So, short answer: I wouldn't worry about it. What matters is...
What does your language do?
OK. Yes, I'm not sure I do worry about it. Well, I would want any language of mine to be logically coherent and as simple as possible. A language built on solid foundations and clearly stated principles - whether mathematical, philosophical or indeed practical in nature. So, one reason to post on the forum is to see if my tentative principles (such as equating representation/appearance and value) have any obvious (or not so obvious) flaws. I can worry less when I see no fatal arguments against them.
I think I used to think that all that was needed was a new version of the manifesto. One somewhat simpler (so without multiple possible representations to pick one example) and more concrete (to pick another example: say prescribe/design some reasonable minimal set of values - so not just the boolean values, but certainly integers, rationals, constructible reals (maybe), intervals, SI units, currency etc). One somehow even more compelling. Then once written, implementations would come.
Now I see two things. One is that the act of building it is essential to the creation of the model. To assume a waterfall approach - with carefully picked, not-to-be-crossed demarcation lines between "model" and "implementation" matters - is to put your faith in almost divine design. It would take a true genius to completely conceive a perfect model on paper. Iteration through loops of building and trying is the human (and, nowadays, the AI) way.
The second thing is that, well, coding is easier* nowadays. There is little excuse for not - at the very least - prototyping. If you are going to the trouble of using formal specifications - creating a BNF, say - it is a bit uppity to refuse to pass the spec through a parser generator. If you formally specify your operator semantics, then it should be a very short step to actually execute those specifications in some computer language. Bring the two together and hey presto, you have a prototype sufficient to validate many parts of your model.
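As a toy illustration of that "executable specification" idea (entirely hypothetical, not from any actual model in the thread): a lexical rule written as BNF in a comment, with its semantics as a plain function, gives a runnable fragment of a prototype.

```python
import re

# Hypothetical lexical spec, executed directly:
#
#   int_lit ::= [ "+" | "-" ] digit { digit }
INT_LIT = re.compile(r"[+-]?\d+")

def denote(literal: str) -> int:
    """Map an integer literal (an appearance) to the value it denotes."""
    if not INT_LIT.fullmatch(literal):
        raise SyntaxError(f"not an integer literal: {literal!r}")
    return int(literal, 10)

# Running the spec answers design questions concretely: this grammar
# chooses to let "+12", "12" and "012" all denote the one value 12.
assert denote("+12") == denote("12") == denote("012") == 12
```

A design that forbids "+12" or "012" is a one-character change to the regular expression, which is exactly the kind of choice a prototype makes visible.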
Of course there is a big difference between a prototype and a sellable implementation. But then again, a prototype is going to interest a start-up investor much more than some dusty document... (and a prototype backed by principles and a model in a dusty document is even better, right?)
So yes, what does my language do? Good question, but before that, the important point is the assumption (with which I agree) that you actually have to have a (demonstrable) language in the first place. I'm not saying Chris and Hugh were wrong not to team up with coders (or code themselves!) from day one... but nowadays, that would be the obvious way to go.
[ * by easier, I mean there are more (and better) languages, libraries, IDEs, programmers etc. Not to mention cheaper/more flexible infrastructure and many other things ]
Quote from Dave Voorhis on December 3, 2021, 5:36 pm
Quote from Paul Vernon on December 3, 2021, 2:02 pm
Quote from Dave Voorhis on December 3, 2021, 12:49 pm
Ultimately, it's only implemented computer language semantics that matter.
So, short answer: I wouldn't worry about it. What matters is...
What does your language do?
OK. Yes, I'm not sure I do worry about it. Well, I would want any language of mine to be logically coherent and as simple as possible. A language built on solid foundations and clearly stated principles - whether mathematical, philosophical or indeed practical in nature. So, one reason to post on the forum is to see if my tentative principles (such as equating representation/appearance and value) have any obvious (or not so obvious) flaws. I can worry less when I see no fatal arguments against them.
Without knowing anything about your intended language, it's hard to say.
Sometimes your use of terminology suggests at least one of us doesn't know what you're doing (:-)), but if you're designing a model or framework for its own sake, or even broadly as guidelines for designing languages -- which is what TTM is -- then there's a certain freedom (though not unlimited freedom) to use or define terminology as you see fit.
But if you're talking about an implementation of a specific computer language, I expect to see terms like literal, statement, expression, value, type, atom, keyword, function, procedure, parameter, argument, operator, variable, constant, syntax, semantics, etc., used in fairly rigorous conventional (per computer science, unless identified otherwise) ways along with clarifications or specifications where helpful. E.g., like clarifying whether "function" is used in the mathematical sense, or the computational sense as "a procedure that returns a value," assuming it's relevant to make such a clarification.
Some terms may not apply, others may need to be included, whilst many are almost unavoidably connected. E.g., you can't really talk about expressions without talking about values, and you usually can't talk about values without talking about types and literals, even if only to wave them away (somehow).
Notably, representation of a value tends to refer to internal, implementation-specific concerns -- irrelevant to language syntax or semantics -- unless representation semantics are exposed in some fashion. How literals denote values is (usually) a separate issue of representation.
And so on.
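The distinction between a value and its internal, implementation-specific representation can be made concrete with a small sketch (Python chosen only for illustration): one value, several encodings, none of which is visible at the level of language syntax or semantics.

```python
import struct

value = 12  # "exactly one integer 12", per the manifesto's phrasing

# Three encodings an implementation might use internally -- all
# irrelevant to what a program that uses the value 12 means:
print(struct.pack("<i", value).hex())  # 0c000000 -- 32-bit little-endian
print(struct.pack(">h", value).hex())  # 000c     -- 16-bit big-endian
print(bin(value))                      # 0b1100   -- a textual rendering
```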
I think I used to think that all that was needed was a new version of the manifesto. One somewhat simpler (so without multiple possible representations to pick one example) and more concrete (to pick another example: say prescribe/design some reasonable minimal set of values - so not just the boolean values, but certainly integers, rationals, constructible reals (maybe), intervals, SI units, currency etc). One somehow even more compelling. Then once written, implementations would come.
Now I see two things. One is that the act of building it is essential to the creation of the model. To assume a waterfall approach - with carefully picked, not-to-be-crossed demarcation lines between "model" and "implementation" matters - is to put your faith in almost divine design. It would take a true genius to completely conceive a perfect model on paper. Iteration through loops of building and trying is the human (and, nowadays, the AI) way.
The second thing is that, well, coding is easier* nowadays. There is little excuse for not - at the very least - prototyping. If you are going to the trouble of using formal specifications - creating a BNF, say - it is a bit uppity to refuse to pass the spec through a parser generator. If you formally specify your operator semantics, then it should be a very short step to actually execute those specifications in some computer language. Bring the two together and hey presto, you have a prototype sufficient to validate many parts of your model.
Of course there is a big difference between a prototype and a sellable implementation. But then again, a prototype is going to interest a start-up investor much more than some dusty document... (and a prototype backed by principles and a model in a dusty document is even better, right?)
So yes, what does my language do? Good question, but before that, the important point is the assumption (with which I agree) that you actually have to have a (demonstrable) language in the first place. I'm not saying Chris and Hugh were wrong not to team up with coders (or code themselves!) from day one... but nowadays, that would be the obvious way to go.
Maybe. It's notable that TTM does not specify a language but provides a conceptual framework for a family of languages, of which Tutorial D was only intended as a paper illustration of certain language semantics.
In short, TTM is a guide for language designers, not a language design.
But my point was not so much about your language in particular -- though I've (somewhat) addressed that above -- but about philosophy vs (is it "vs", or is it "of"?) computer languages in general: again, the only thing that really matters is what any language does, i.e., its syntax and semantics.
Quote from Erwin on December 3, 2021, 6:12 pm
And to expand on that: the guide is such that it lays out the principles that must be adhered to both by a language that goes "EQ(A,B)" and by one that goes the usual "A = B". It is impossible to write something for a parser generator that usefully processes "any possible option", and it is impossible for parsers to check conformance of the (grammar for the) parsed language with the principles.
The BNF grammar you find in the TTM literature is merely for the ***example*** language. The real core is the principles.
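Erwin's point about "EQ(A,B)" versus "A = B" can be sketched in a few lines (a toy illustration, not anything from TTM itself): two surface syntaxes, one underlying semantics of value equality. A parser checks which concrete grammar you chose; conformance to the principle lives in the semantics.

```python
# Prefix, function-call style: EQ(A, B). Underneath, it is the same
# value-equality semantics that an infix "A = B" syntax would denote.
def EQ(a, b):
    return a == b

assert EQ(12, +12)        # same value, different spellings in the source
assert EQ("12", "12")
assert not EQ(12, "12")   # value equality, not spelling equality
```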
Quote from Paul Vernon on December 3, 2021, 10:59 pm
Quote from Dave Voorhis on December 3, 2021, 5:36 pm
Sometimes your use of terminology suggests at least one of us doesn't know what you're doing (:-)), but if you're designing a model or framework for its own sake, or even broadly as guidelines for designing languages -- which is what TTM is -- then there's a certain freedom (though not unlimited freedom) to use or define terminology as you see fit.
Thank you Dave. I certainly desire to be more accurate in my use of terminology - your feedback helps.
Notably, representation of a value tends to refer to internal, implementation-specific concerns -- irrelevant to language syntax or semantics -- unless representation semantics are exposed in some fashion. How literals denote values is (usually) a separate issue of representation.
OK.
I guess I've been using "representation" when using "literal" might have been clearer. I would typically say "encoding" maybe, or "physical representation" when referring to internal, implementation-specific concerns, but that might be untypical.
Maybe. It's notable that TTM does not specify a language but provides a conceptual framework for a family of languages, of which Tutorial D was only intended as a paper illustration of certain language semantics.
Yes. That is a point I have heard more than once. I suspect I've never really found it a significant one. For me TTM is a specification of a model - a version of the Relational Model of Data. Part of that model is a specification of an algebra of (some minimal set of) operators. Part of the model is principles such as "the Information Principle" and other things too. The fact that you could build different languages that conform to the model is - I guess - sort of obvious (to me anyway), and so I'm not sure quite how notable it is.
My other thought here is whether Chris and Hugh were too ambitious, or not ambitious enough (or just right), in the structure/goals of TTM. I don't think I can answer that, but I certainly think there is a lot to be said for having a concrete language to feed back into the framework. Tutorial D - I'm sure - has played some of that role, and I'm sure Rel has for its part too.
In short, TTM is a guide for language designers, not a language design.
But my point was not so much about your language in particular -- though I've (somewhat) addressed that above -- but about philosophy vs (is it "vs", or is it "of"?) computer languages in general: again, the only thing that really matters is what any language does, i.e., its syntax and semantics.
Yes. Syntax and semantics. But then semantics in the broad sense is all about meaning, and it is (I think) unwise to detach philosophy or indeed "reality" from it.
Quote from Dave Voorhis on December 3, 2021, 11:20 pm
Quote from Paul Vernon on December 3, 2021, 10:59 pm
Quote from Dave Voorhis on December 3, 2021, 5:36 pm
Sometimes your use of terminology suggests at least one of us doesn't know what you're doing (:-)), but if you're designing a model or framework for its own sake, or even broadly as guidelines for designing languages -- which is what TTM is -- then there's a certain freedom (though not unlimited freedom) to use or define terminology as you see fit.
Thank you Dave. I certainly desire to be more accurate in my use of terminology - your feedback helps.
Notably, representation of a value tends to refer to internal, implementation-specific concerns -- irrelevant to language syntax or semantics -- unless representation semantics are exposed in some fashion. How literals denote values is (usually) an issue separate from representation.
OK.
I guess I've been using "representation" when "literal" might have been clearer. I would typically say "encoding", or perhaps "physical representation", when referring to internal, implementation-specific concerns, but that might be atypical.
"Encoding" or "physical representation" is fine for implementation-specific concerns.
Note that a literal also has a physical representation -- a string of Unicode characters, for example -- that may be implementation-specific or part of the language specification.
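To make that distinction concrete, here is a minimal Python sketch (my own illustration; Python is an arbitrary choice, nothing here is mandated by TTM). Several distinct literals denote the one integer value 12, while each literal is itself just a string of characters with its own encoding, separate again from any internal encoding of the value:

```python
# Distinct literals, one value: every spelling below denotes
# the single integer 12, so equality holds across all of them.
assert 12 == 0b1100 == 0o14 == 0xC == int("12")

# A literal is itself a character string with a physical
# representation of its own -- e.g. the three characters "0xC"
# encode to UTF-8 bytes quite unlike any machine word holding 12.
literal = "0xC"
print(literal.encode("utf-8"))   # b'0xC'
print(int(literal, 16))          # 12

# An implementation may also expose a chosen encoding of the
# value explicitly -- here, two big-endian bytes:
print((12).to_bytes(2, "big"))   # b'\x00\x0c'
```

The three things -- the value 12, its many literal appearances, and any byte-level encoding -- remain distinct throughout.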
Maybe. It's notable that TTM does not specify a language but provides a conceptual framework for a family of languages, of which Tutorial D was only intended as a paper illustration of certain language semantics.
Yes. That is a point I have heard more than once, though I've never really found it a significant one. For me, TTM is a specification of a model - a version of the Relational Model of Data. Part of that model is a specification of an algebra of (some minimal set of) operators; part of it is principles such as the Information Principle, among other things. The fact that you could build different languages that conform to the model is - I guess - fairly obvious (to me, anyway), so I'm not sure quite how notable it is.
My other thought here is whether Chris and Hugh were too ambitious, not ambitious enough, or just right in the structure/goals of TTM. I don't think I can answer that, but I certainly think there is a lot to be said for having a concrete language to feed back into the framework. Tutorial D - I'm sure - has played some of that role, and I'm sure Rel has for its part too.
In short, TTM is a guide for language designers, not a language design.
But my point was not so much about your language in particular -- though I've (somewhat) addressed that above -- but about philosophy vs (is it "vs", or is it "of"?) computer languages in general: again, the only thing that really matters is what any language does, i.e., its syntax and semantics.
Yes. Syntax and semantics. But then semantics in the broad sense is all about meaning, and it is (I think) unwise to detach philosophy or indeed "reality" from it.
No need, if philosophy or reality or "reality" help you define the semantics (and syntax too).
But ultimately, a language will be judged on its syntax and semantics over its philosophy. If the semantics and syntax make sense, the philosophy (which presumably the semantics embody) can otherwise be ignored.