THE REFLEXIVE CEILING OF PHILOSOPHICAL SEMANTICS: A STUDY OF INTERNAL TENSIONS WITHIN SEMANTIC THEORY AND ITS LIMITATIONS TO ACCOUNT FOR IDEAL CONDITIONS OF ASSERTABILITY

There is a consensus that the reflexive foundations of modern semantics originate in Frege's work. Since Frege's distinction between two components of meaning (sense and reference), however, semantics has been forced to lead a double life. Among its first receptions, in Russell's famous article (1905), the first unresolved criticism of this solution was that it is not possible to split semantics into a theory about two classes of objects without their yielding one and the same thing under lower and higher conditions of instantiation (depending on the function used to identify it). But even Russell could not avoid a crisis: it is not possible to reconcile semantic coordination for a set of non-classical extensions of instantiation and encoding (possible instances, counterfactual truth values, etc.) while preserving the classical properties of signification. This article covers these moments with a rough diagnosis: modern semantics has a reflexive ceiling. It is unable to model the contingent features of an "object" without oversizing itself to deal with the various constraints on that object adapted to various strategies of intensional and modal specification. In order to model idealized conditions of assertability (Putnam), one must filter the sentences that pass the Tarskian test using non-semantic parameters, such as the coherence of a scientific paradigm. Semantics cannot keep that model without ceasing to be semantic. We conclude with a response to attempts to give semantic status to complex scientific reasoning, and a suggestion as to how to locate the philosophical origin of this claim.

Geltung, vol. 2, n. 1, 2022

PRELIMINARIES: SEMANTICS BEFORE AND AFTER THE MECHANICAL AND COMPUTATIONAL STUDY OF LANGUAGES

For many, semantics is but the study of meaning. Technically, however, the story is more complicated. Semantikos is indeed the Greek word for 'significant'. But the term has been used historically in literature for many purposes, including reporting on the art of interpreting signs in prophecy. The similarity of the fields denoted by semantics and semiotics is remarkable; the second word comes from the Greek sēmeiōtikos (belonging to signs). Only after linguistics emerged as a mature science, as an alternative to those not-yet-mature studies, did semantics begin to be taken into consideration and applied in the formulation of a mathematical theory of linguistic structure.
Choosing a discourse topic in which "semantics" is already identified in a prescientific manner, even though it is on its way to a strict definition in a distinct context, may be an interesting strategy for avoiding falling into the trap of the "scientific/pre-scientific" dichotomy, as well as avoiding becoming dependent on it. Recently, the term semantic capital has been used to refer to the potential for wealth derived from the acquisition of cultural objects (knowledge, inventions, traditions, ideas, discoveries, languages, arts, etc.) capable of generating meaning in human life: "We use this wealth - which I shall define more precisely as semantic capital in the next section - in order to give meaning to, and make sense of, our own existence and the world surrounding us, to define who we are, and to develop an individual and social life" (FLORIDI, 2018, p. 481). The introduction of this concept requires a process of reflection in which "semantics" is not considered as a pre-scientific concept, but as a concept with amateurish content that robs it, at least temporarily, of its technical usage and its ontological identity. In this reflexive dimension, it is necessary to return to the description of the senses in which the acquisition of semantic articles helps to eliminate the lack of meaning in human life. Some questions may help. Is it (a) in a metaphysical or nihilistic sense, in which meaninglessness appears as absurdity or a vacuum of meaning? Is it (b) in a folk-psychological sense, in which meaninglessness appears as madness? Is it (c) in a theological or moral sense, in which meaninglessness appears as the "evil in the world"? Or is it (d) in a linguistic sense, in which meaninglessness appears as the inability to produce computable values that stand for signs? Only the last sense corresponds to the semantics known in the departments of analytic philosophy and structuralist linguistics. Therefore, it is necessary to separate the general term "semantics" in all its fullness of uses from d-semantics, corresponding to its use as an object of linguistic ontology and analytic-logical study. It is in this latter sense that we will approach it to begin the article, but not without an anticipated view to its broader potential.

FREGE AND THE ANALYTIC TRADITION: PROBLEMATIZING THE PROFILE OF MEANING
As there was much ado in the second half of the 19th century about the possibility of reducing mathematics to logic (understood here not as a mere formalism but as a theory of truth), semantics eventually entered philosophy as a tool for thinking about the idealized correlation between the structure of language and its truth value. This brief history, which we will discuss in more detail, shows how semantics, starting from the study of sign interpretation, became the starting point for a new kind of philosophical reflection. Among the prominent authors who engaged in the enterprise of logicism, Frege was the one who produced the most reflexive richness not necessarily related to the enterprise itself.
The concepts that structure Frege's thought are developed and presented throughout his work. We may schematize at least two main approaches in it: his Begriffsschrift (1879) and Die Grundlagen der Arithmetik (1884) are two examples of works that deal mainly with questions in the field of the philosophy of logic and mathematics. The other concern we find in his work is that of language, which receives special attention in texts such as Über Sinn und Bedeutung (1892), Über Begriff und Gegenstand (1892), Funktion und Begriff (1891), and Was ist eine Funktion? (1904). Frege's reflections extend to ontological questions, and his main opponents on the shelf of philosophical concepts are psychologism and formalism. But it is his project of reducing arithmetic to logic that influenced him entirely in the direction of a semantic view of the ontological problem of the content of assertions. To make the transition from arithmetic to logic, the German author developed syntactic techniques - such as the function and the argument - to represent the relationship between extensions or instances and propositions. By contributing to the universe of symbolic notation with a representation for quantification, he made a move that has been followed by many branches of science, including computational theory, logic, and semantics. His effort to reduce parts of mathematics to logic was facilitated by the ability to formalize statements endowed with multiple generalities. Finally, the distinction between sense and reference that Frege proposed in 1892 was crucial to discussions of semantics in modern times, the crucial focus of which is the relationship between language and ontology. According to Frege, every assertive sentence has a sense, i.e. the thought expressed by it, and a reference, i.e. the truth value of the thought. Each sentence with sense and reference consists of one (or more) saturated expressions - names - and an unsaturated expression - a function (FREGE, 1879, § 9; 1892; 1892-5). 1

Our argument is that Frege has placed the discussion of the nature of meaning within the limits of the semantic problem as we understand it as d-semantics. Frege was not the only one whose work contributed to this maturation. However, because of his influence and the scope of his work, it is justified to focus on him as the key figure at the beginning of this process. The author defined the nature of meaning, the crucial philosophical question underlying this enterprise, by illuminating the possibility of symbolizing relations of correlation and consequence, as well as cumulative or slingshot descriptions of the rule expressed by the predicate "is true". For he planted the seed for describing all the ways of composing sentences from a single recursive mechanism, i.e., as a function that maps values to all sentences in that language, defining its possible interactions.
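Frege's device of function and argument, together with his quantifier notation, is what makes multiple generality formalizable. A schematic illustration (our own example, in modern notation rather than Frege's Begriffsschrift):

```latex
% "Every number has a successor": two nested generalities that a
% pre-Fregean subject-predicate analysis could not disentangle.
\forall x \, \exists y \; S(x, y)

% A sentence as a function saturated by an argument: "Homer is a poet"
% applies the unsaturated expression P(\,\cdot\,) to the name Homer,
% and the reference of the whole is a truth value.
P(\mathrm{Homer}) \in \{\mathrm{True}, \mathrm{False}\}
```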
The result was the emergence of a new professional philosophical belief: thinking or representing could be reduced to competence in learning and speaking a language. It is the acquisition of the algorithm that allows one to compose any sentence of a language from a very small knowledge of repeatable structures. The ability to generate sentences without having to know more than their rules of structure and composition revealed the organic coherence of a sign system and explained how these signs can unlock messages with non-contradictory, ambiguous, redundant semantic values, etc. This characterization of the problem worked admirably to shift the focus of philosophy from the epistemological question about the justification of our beliefs to the semantic question, which we can find in Carnap: "[t]he 'linguistic' (better, semantic) theory of the a priori [...] in the writings of [...] Carnap would simply say that (...) when a statement is necessary, it is because its rejection would be no more than a misleading way of rejecting the language (the system of meanings) to which it belongs" (COFFA, 1991, p. 139). On this view, the reference of a complex expression (e.g., 'the president of the United States') is not the sum of its parts (e.g., 'president' and 'United States'), but a completely different element (here, Joe Biden). With this happy ending to the seed planted by Frege, this piece of philosophical vocabulary - meaning - ceases to be part of the logical mysteries.
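The recursive mechanism described above can be sketched in a toy model (our own illustrative construction, not a formalism from the literature): a small lexicon plus a handful of composition rules suffices to compute the truth value of arbitrarily many sentences.

```python
# Toy compositional semantics (an illustrative sketch, not Frege's own
# formalism): sentence meanings are computed recursively from a small
# lexicon and a few composition rules.

# Lexicon: names denote objects; predicates denote functions from
# objects to truth values (Frege's "unsaturated" expressions).
LEXICON = {
    "Homer": "homer",
    "Russell": "russell",
    "is_a_poet": lambda x: x == "homer",
    "is_a_philosopher": lambda x: x == "russell",
}

def evaluate(expr):
    """Recursively compute the reference (truth value) of a parsed sentence.

    A sentence is a tuple: ("pred", predicate, name) applies a function
    to an argument; ("not", s) and ("and", s1, s2) compose sentences.
    """
    op = expr[0]
    if op == "pred":
        _, pred, name = expr
        # Saturation: apply the unsaturated function to the name's bearer.
        return LEXICON[pred](LEXICON[name])
    if op == "not":
        return not evaluate(expr[1])
    if op == "and":
        return evaluate(expr[1]) and evaluate(expr[2])
    raise ValueError(f"unknown operator: {op}")

# "Homer is a poet" and "Homer is a poet and not a philosopher".
print(evaluate(("pred", "is_a_poet", "Homer")))                     # True
print(evaluate(("and",
                ("pred", "is_a_poet", "Homer"),
                ("not", ("pred", "is_a_philosopher", "Homer")))))   # True
```

The point of the sketch is only that a finite rule set generates truth values for an unbounded class of sentences; nothing here depends on the particular lexicon chosen.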

The transition to its scientific phase is completed. Accordingly, meaninglessness (the absurd, the unsayable, etc.) is no longer an empty title associated with paradoxes, paraconsistency, consistency adjustments, etc., and all these problems are given a technical approach by showing how models, categorical relations between structures, or, more generally, semantic experiments can be performed to prove an assertion or show that it is not proved.

THE REAL CHALLENGES HOLDING PHILOSOPHICAL SEMANTICS
We have chosen to pass over Quine's semantic skepticism, although we are aware of its implications for this discussion. Among the episodes we have skipped in order to maintain the focus of the article, this jump deserves a brief justification. We have done so because Quine's semantic skepticism affects Carnap's positivism without compromising the developmental possibilities of semantics as a later scientific program. The controversy between Carnap and Quine served as a postulate of deliberation on certain fallible features of knowledge of meaning equivalence under richer-than-extensional conditions. 2 But linguists, unconcerned about the enduring identity of the predicate "blue" across worlds, have not abandoned meaning-determining programs; after all, enriched languages are not immune from being codified into intensional languages, as Ruth Barcan Marcus has aptly noted: "on the level of individuals, one or perhaps two equivalence relations are customarily present: identity and indiscernibility. This does not preclude the introduction of others such as similarity or congruence, but the strongest of these is identity" (BARCAN MARCUS, 1961, p. 304). Linguistics, cognitive psychology, and other branches of computer science seemed to be the natural heirs of this approach. But even if we had no consensus on the legitimate heirs, we could know that an earlier eldest son had lost priority. Philosophy was losing ground or becoming pure linguistic analysis, a therapy devoted to unraveling the obvious. We will remember in this chapter that this optimism was short-lived.

2 For an interesting discussion of the similarities and differences between Carnap and Quine on the problem of indeterminacy, see William Berge (1995). The author points out aspects in which Carnap's conception is indifferent to Quine's attacks by "focusing on an example of translational indeterminacy from Carnap's Meaning and Necessity; indeed, one which bears a striking resemblance to (and which was published prior to) Quine's radical translation problem" (1995, p. 115).
The described characterization of the problem was accompanied by some challenges to the ability to find a unified and straightforward solution to a theory of meaning for the sentence, and therefore for a whole language. Among the so-called Frege-Russell puzzles are the negative existential problem, the problem of informativeness, and the problems with beliefs and other propositional attitude statements. Some of these characterizations, however, are caricatures put on to impress. The negative existential reveals a nuanced aspect of our referential thought: our ability to refer to an object by applying a higher-order rule, such as a propositional function. When we engage in thinking to identify the reference of an utterance, we are, of course, engaging in the same kind of reasoning that is used to distinguish the parameter from directional cues that lead in the same direction when searching for different sources of information on a topic.
Let us tentatively find a solution to the problem of determining semantic value under circumstances of intensional complexity, a solution that we think has an intuitive character. The mission is to lose as little as possible in semantic interpretation from one context of evaluation to another. We may call it the mission of avoiding losses in translation. Initially, the statement "Homer was a poet" is true (and not false) only in a very narrow evaluative context, namely our world and our history. To disentangle the statement from that narrow context, we can work with reconstructions of what was said. We can change parameters. We can always encode a presuppositional parameter in which we can assert that Homer is a poet in the exact conditions in which to say that he is not a poet is false. It is the second encoding that extends the contextual consistency of the first interpretation and gives semantic - not just indexical - coherence to our claim that Homer is a poet. Only the second encoding provides sufficient extensional security to ensure that no interpretation of Homer is a poet implies the sentence Homer is not a poet when the sentence is uttered in a different evaluative context.
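The two encodings just described can be sketched in a toy model (our illustration, not a formal semantics from the literature). The first, "indexical" encoding evaluates the sentence at whatever circumstance it is handed; the second fixes a presupposed circumstance, so its value survives shifts in the context of evaluation.

```python
# Toy circumstances of evaluation: the same sentence receives different
# truth values at different circumstances.
CIRCUMSTANCES = {
    "our_history":    {"homer_is_a_poet": True},
    "counterfactual": {"homer_is_a_poet": False},
}

def indexical_encoding(circumstance):
    # First encoding: truth value varies with the context of evaluation.
    return CIRCUMSTANCES[circumstance]["homer_is_a_poet"]

def presuppositional_encoding(circumstance, presupposed="our_history"):
    # Second encoding: the extra parameter anchors evaluation to the
    # presupposed circumstance, so the passed-in circumstance no longer
    # affects the result - stability bought with a richer decoding rule.
    return CIRCUMSTANCES[presupposed]["homer_is_a_poet"]

print(indexical_encoding("our_history"))            # True
print(indexical_encoding("counterfactual"))         # False: value flips
print(presuppositional_encoding("counterfactual"))  # True: value is stable
```

Note that the stability is bought by enriching the rule: the second function needs a parameter the first did not, which is precisely the cost the argument goes on to diagnose.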
Just as in ordinary language we need to know the non-indexical version of a contextual sentence to avoid losses of meaning and inferential value, we also need an intuitive rationale for modeling the second parameter of sentence meaning. The task is to lose only the aspect of the sentence Homer is a poet that can be interpreted as both true and false, that is, only the aspect of the sentence that admits incompatible truth assignments or non-aligned systematic maps; in other words, we accept giving up only the non-semantic aspect of the sentence.
Nevertheless, to give the sentence a semantically stable interpretation, we had to increase the mathematical complexity of the decoding rule to protect the sentence from context changes. One can compute or decode those coding additions only by adding rules and structures, i.e., by enriching the language.

Therefore, our intuitive solution undermines the design of classic semantic experiments - simple computation - to determine the decidability of sentences.
Then we lost something in translation: translation will not be straightforward if we need more rules to interpret something in one language than we needed in the original language, because the intertranslatable sentences would be revisable by incompatible or concurrent rules. No classical semantic experiment will serve to test the sentence both in one context and in the other. We failed our mission. We already lost something in translation. Someone might say that at least we have preserved the indistinguishability of the sentences in extension. That is only fair. The least we can do is preserve this: that Homer is a poet and Homer is not a poet remain extensionally incompatible, or that the rules that identify the difference between those two sentences are extensionally indiscernible. However, even this cannot be guaranteed. There will always be some form of encryption that hacks our protection system in such a way that it is possible to invert a sentence from true to false. The only thing we can do is constant reprogramming to keep our sentences at a place on the Tarskian scale where they are only true - or only false.
By overriding classical semantic experiments, we also complicate simple intuitive concepts like "understanding a sentence." A computer can understand these changes of parameters only if its coding structures are rewritten. Even if this recourse does not pose a problem for a gifted mathematician, it is unlikely to explain language learning in general, for so much mathematical ingenuity will prove too demanding to explain so widely distributed an ability.
One influential line of study in the twentieth century took the route of dividing sentence structure into a surface state and a deep state (which contains more decoding possibilities than purely surface grammar). 3 However, although this provides assumptions for dealing with this problem, it creates an "ultra-semantization" - non-straightforward semantic conditions - and the same problems arise from this assumption: we have to design complex semantic testing models for meaning, and then meaning simply loses its intuitive and straightforward character.

SEMANTICS' DOUBLE DUTY: INTUITIVE TWO-DIMENSIONALISM AND ITS COMPLEXITY FOR SEMANTICS
We said in the previous chapter that there is no certainty that a change in our evaluation parameters is not radical enough to reverse a proposition from true to false. The liar paradox is the best evidence of the fact that one does not need to circumvent the regular grammatical parameters to project a contradiction. Only by constant reprogramming can we keep our propositions at the position on the Tarskian scale at which a proposition is true only if it is nothing but true. However, this entangles us in a complex reasoning that attempts to idealize the conditions of the sentence, i.e., to find a point of stability at which the evaluation of its truth always runs in the opposite direction from the evaluations that would make it false. The reason that makes p true under this ideal condition cannot be insufficient to make not-p false. However, this idealization condition is highly theoretical. There are no data or matters of fact that give complete certainty that anything that supports the proposition p also supports not-not-p, or even rejects support for not-p. To address this problem, Hilary Putnam developed an ideal state of assertibility that we may achieve in stable epistemic states of belief. Truth is "some sort of ideal coherence of our beliefs with each other and with our experiences as those experiences are themselves represented in our belief system" (PUTNAM, 1981, pp. 49-50). We might construct the ideal parameters for those assertive states in the way an economist adds a ceteris paribus clause to determine the stability conditions of a conclusion under a restriction of possibilities.
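The idealization condition just described can be stated schematically (our formalization, not Putnam's own):

```latex
% Under ideal conditions of assertibility, whatever supports p must
% also support \neg\neg p and must withhold support from \neg p:
\Gamma \models_{\mathrm{ideal}} p
\;\Longrightarrow\;
\Gamma \models_{\mathrm{ideal}} \neg\neg p
\quad \text{and} \quad
\Gamma \not\models_{\mathrm{ideal}} \neg p
```

Here $\Gamma$ stands for the (idealized) body of beliefs and experiences; the schema makes visible why the condition is theoretical rather than observational: nothing in the data guarantees the implication holds.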
Coding the conditions under which the truth value of a model cannot be overturned may seem like a very complex skill. However, we must assume that it is as intuitive as possible, so as to capture common sense in trivial assertion situations. We must reconcile the complex theoretical ability with intuitiveness.
But there is a dilemma here. If we are right, and even common sense is equipped with the means to idealize assertion conditions, we enter a realm of sentence evaluation that is no longer purely semantic: the purely semantic part, which is the disquotation of the sentence in Tarski's pattern, is only the superficial format in which the sentence is presented after a long theoretical work of idealization that sets the ceteris paribus as a parameter.
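For reference, Tarski's disquotational pattern is the schema whose instances the idealization must license; in its standard form:

```latex
% The T-schema: a metalanguage truth predicate for a language L is
% adequate only if it yields every instance of
\ulcorner \varphi \urcorner \text{ is true in } L
\;\leftrightarrow\; \varphi

% Classic instance: "Snow is white" is true iff snow is white.
```

The dilemma above is that which sentences may be substituted for $\varphi$ is settled, on our account, by the prior work of idealization rather than by the schema itself.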
The dilemma is that semantics does not fully account for its own phenomenon: meaning. As a science, it must make room for phenomena it cannot predict a priori - before meaning is fixed or programmed by the scientific classifications that give it the desired coherent stability. But how can a science grasp its object only after this object has already been grasped by another science? How can a science allow its object of study to be objectified twice? This strange condition arose with the philosophical reflection that initiated the transition from the pre-scientific to the scientific phase of meaning research.

According to Davidson:
asks us to suppose that certain verbs, like 'believes', do double duty. First, they create a context in which the words that follow come to refer in the usual sense or meaning. Second (assuming the verb is the main verb of the sentence), they perform a normal kind of duty by mapping persons and propositions on to truth values. (DAVIDSON, 2001, p. 14)

Since Frege, therefore, semantics has been asked to do double duty. Russell tried to avoid this consequence by turning the question into a second-order problem about the propositional function that classifies or recognises a superextension. After all, ceteris paribus, the extension of a concept in a given context need not be determined twice if one understands it and has all the information regarding that context. For Russell there is no double duty without its turning into a single duty, or something enigmatic:

it would seem that "C" and C are different entities, such that "C" denotes C; but this cannot be an explanation, because the relation of "C" to C remains wholly mysterious. (RUSSELL, 1905, p. 487)

Everything is artificial here, though. We might as well say that propositional functions are a technical improvisation for the complex predicates present in scientific classifications of species and genera: they create the parameter to judge, for example, that fishes are not whales a priori. The incompatibility of fishes and whales is contingent in one extensional condition, but not in the superextensional condition where the "incompatibility between whales and fishes" is necessarily unified by a second-order predicate (or a predicate of a propositional function: 'it is impossible that...'). So Russell's and Frege's solutions were artificial all the same. To rely on these artifices is to allow a community of speakers to access a dubious arsenal of semantic mappings that includes a predicate that is true for the instances that exemplify - according to an artificial parameter - the identity between two true sentences that are false under different conditions, that is, that are revised under different rules. Using too many of these devices to superdimensionalize our semantic possibilities, our language is likely to become a coding frenzy that is less effective for communication and more conducive to deception. What this shows is that we cannot split the study of meaning without putting into question semantics' ceiling. This coincides with a lack of consistency in the proof rules used by language users, or a proliferation of confusing proof rules and parameters. Russell showed that one cannot split semantics without creating supersemantic conditions, but neither he nor Frege realised that this would collapse semantics. Mapping incoherences would become part of the normal course of business in such a dimension.

SEMANTICS' REFLEXIVE CEILING AND ITS DISGUISED IDEALIZATIONS
We have seen above that the doubling of semantic dimensions does not prevent the formation of a new (and unified) dimension, but this new dimension cannot be completely separated from its regional consequences, such as the determination of predicates formed within specific scientific (biological, etc.) classifications. The problem is not whether two-dimensionality is successful or unsuccessful, but that this solution threatens the upper limit of semantics as a science. It makes it lose its reflexive upper limit. The upper limit of its reflective framework is threatened when it depends on the delimitation of an object contextualized by the idealized parameters of another science, such as pragmatics, hermeneutics or, worse, biology and physics. For that would require us to deal with parameters that are not straightforwardly computable.

In any case, they become computable only when we have a sufficiently mature background theory that synchronises biology, physics, etc. with our semantic learning. But what would then be left of semantics? Not much, because we would need to know the entire canon of scientific production to make sense of our everyday sentences.
The origin of this dilemma is the way in which analytic philosophy since Frege has framed the problem. For this tradition, the failure of meaning - in its form of referential ambiguity - can only be dealt with within a reflexive ceiling that limits the diagnosis of the problem. Two-dimensionalism arose from the intuition that semantics must supersize itself, and this is based on Frege's strategy to counter psychologism and formalism: to admit an objective and extra-referential dimension of meaning, with the addition that the result is not that far from the canon of classical meaning properties. Regardless of the result, it must still convey the properties of repetition, composition, and recursive computability that make meaning and analyticity something tangible without additional requirements (such as empirical additions or indexical collaterals). As a result, the semantic object can only conform to the paradigmatic objects of contemporary culture and science through some work of idealization, i.e., by limiting its ceiling through some conceptual coherentist constraint on the most rewarding possible mappings - i.e., those mappings most apt not to be confronted by incompatible mappings, or most apt to give straightforward Tarskian truth-values to our preferred theories. But this is not a trivial semantic task. It involves choices that are often not only conceptual but also ideological.
It involves the coherence constraints applied to achieve the state of stability at which our beliefs should arrive, enforced in a strategic way. This leads to the following critical scandal: sentences that qualify as admissible substitutions in Tarski's disquotation scheme cannot do so by merely satisfying semantic conditions (as Tarski intended them to). First of all, a sentence only reaches the state on Tarski's meta-linguistic scale where its truth-value cannot be overruled if we reach the state Putnam described as a condition for rational assertibility. That is why the holistic coherence conditions for occurring in this T-scheme involve a certain broader conceptualization. In some theories, "God is infinite" will appear as a possible substitute for the schema; in others, it will not.
More scandalously, complex cognitive modelling properties, such as those involved in determining the truth content of a proposition about the orbital circumference of the moon - or in solving the problem of what would be the case if the moon left orbit - are presented as simple and straightforward, as illustrated by the computational mapping techniques of model-theoretic semantics for fallible necessary claims or contingent mathematical correlations. As complex as the subconscious and conscious skills required to learn a language are, it is doubtful that they equate to the skills required to mathematically model the very contingency of the moon's orbit.
The attempt to semanticize the regular behavior of celestial bodies resembles in part the Kantian effort to categorize the truths of Newtonian and Euclidean science. Finding the ideal place to assert the truths of science in a "non-overridable" context is similar to Kant's attempt to determine certain empirical truths a priori. We cite this not as evidence that Kant was wrong, but rather as evidence that the difficulty is somewhat more radical than theoretical semantics is accustomed to.

It involves a theory about idealized rational assertions (Putnam). More than that, it involves a complex theory like Kant's about human rationality and the ideal conditions of possible experience: "In fact it cannot even be seen how there could be a logical principle of rational unity among rules unless a transcendental principle is presupposed, through which such a systematic unity, as pertaining to the object itself, is assumed a priori as necessary" (KrV, A 651/B 679).
Logical positivism was the first heir of this tradition. Our article, on a first reading, defines itself by noting the difficulties semantics faces in overcoming its reflexive ceiling. In the technologies created by its founders - Frege and Russell (to name only those with whom we have worked) - we see how changes in the parameters to accommodate second-order predications or predicates of propositional functions lead to an expanded dimensioning of the semantic object and inflate the rules that can be used to predict its meaningful behavior. The double life of meaning, divided into two extensional layers (Russell), or into an extensional and an intensional layer (Frege/Church/Carnap), led to a crisis. Among those who did not yield to semantic skepticism (Quine) and rejected tests and computations of meanings that could not ultimately be traced to linguistic accommodations of their (hyper-reificational) modal and scientific preferences, there were those who attempted to settle by converting theories of meaning into sentence-forming specifications for one entire language (Davidson, Dummett).
But non-skeptics have to give in too. The more difficulties arise in indexicality, circumstances of evaluation, or the determination of ingredient sense, the clearer it becomes that the search for a parameter capable of doing double duty and identifying the same "meaning" in different dimensions (sense and reference, assertion and ingredient, etc.) does not preserve semantics. The parameter is possible, but it is a complex idealization that fails to preserve the triviality of semantic reasoning. As Putnam teaches, we look for theoretical parameters to assert propositions in a way that is maximally protected from rules that might override the truth or falsity of our proposition as it enters argumentative interactions that presuppose a cumulative projection of meaning. We are looking for an ideal parameter where our sentence is protected from being encoded - or modeled - as false under the same conditions as it is true, passing mixed signs into an inferential system. However, since this is only possible within specifically coherent and stable scientific theories, we can no longer say, as the Tarskian project did, that the conditions under which a sentence is true are purely semantic.
The second conclusion of the article, contained in its development but which must be made explicit, is the following. The attempt to model empirical content through a metaphysical semantics capable of providing a model for the truth of counterfactual claims is, according to our premises, an artificial project.
It is not only artificial. It is dangerous, because it occupies a mysterious position, which we will set out in what follows. The mentioned project is based on the assumption that it is possible to extend the Frege-Tarskian project to non-classical cases of semantic models by mapping non-reversible values to expressions such as "it would be the case that" or "it is possible that". Ingenuity and academic resources have ensured that a group of talented scholars (we will not mention their famous names so as not to make the article unnecessarily long) have been able to find stable platforms for modeling the states in which these complex properties of meaning correspond to simple semantic techniques for charting coherent mappings. But that creates more problems than it solves. Now the idea of meaning has been extended to the point where understanding "meanings" coincides with understanding advanced stages of scientific theorizing about causality and other types of non-causal connection, which requires meaning-learning of a very specialized nature. In the end, if these two dimensions can be merged, we will be left with the curious conclusion that children can learn complex scientific reasoning as naturally as they learn their native language, or that learning their native language involves algorithms as complex as those used to discover physical connections.
So, the conclusion of the article can be used to draw some lessons. We will propose that the problem arose through a collective and institutional decision of analytic philosophy, initiated by Frege's transformation of the synthetic necessities of mathematics into analytic statements, and continued by Carnap and the plan to enhance positivism with semantic principles: "I believe with Tarski that this is also the sense in which the word 'true' is mostly used both in everyday life and in science" (Carnap, 1949, p. 121). As a result, one of those collective consensuses spread, and it was deeply rooted. The consensus referred to the fact that trivial semantic conditions, such as those for learning the algorithm that decodes the sentence structure of a language, are not that trivial. Empirical scientific learning is then equated with language learning. Theoretical semantics would extend to mathematics and the natural sciences, and would be a widely used method for theorizing proof and truth. It would filter science from pseudoscience, support the positivist thesis that metaphysics consists of meaningless propositions, and create a new transcendental sheriff for conditioning knowledge of truth.
But the disastrous consequences of this consensus did not die with positivism. In the second half of the twentieth century, with Davidson, this consensus reached another level. For this author, the non-triviality of semantics was related to our ability to distinguish languages whose structures do not undermine our ability to make non-defeasible assertions. Tarski's pattern would provide the elements to filter these languages: "Tarski's truth definitions are not trivial, and they reveal something deep about languages of any serious expressive power" (2001, p. 11). This decision set the stage to inoculate linguists and theoreticians of computation (including those reflecting on artificial intelligence) with the mindset that there is no scientific, mathematical, or empirical truth that cannot be modeled by semantic strategies. More problematically, with Davidson, the first consensus is transformed into the radical thesis that sentences that are too complex to be theorized by Tarski's scheme are also too complex to be interpreted in a language, either eliminating scientific-technical vocabularies from the class of languages or assimilating all science into the tiny fragment of a period's theories that can be expressed in Tarski's way.
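For readers who want the pattern before them: the schema Davidson appeals to is Tarski's Convention T. The following is a standard textbook statement of it, given here as a sketch; the snow-is-white instance is the canonical example from the literature, not a quotation from this article.

```latex
% Tarski's Convention T (schematic, textbook form):
% an adequate definition of truth for an object language L
% must entail every instance of the schema
(T)\quad s \text{ is true in } L \iff p
% where $s$ is a structural description (a name) of a sentence
% of $L$, and $p$ is that sentence's translation into the
% metalanguage. Canonical instance:
\text{``Snow is white'' is true} \iff \text{snow is white}
```

In these terms, the dispute is over whether the right-hand side can be supplied by purely semantic means for sentences of arbitrary theoretical complexity, or only for sentences belonging to stabilized theories.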
Our article tries to provide an answer against those assumptions. The answer is that while certain sentences can indeed be substituted in the Tarskian schema, this is not by virtue of their mere semantic or structural properties, but because they have reached a highly theoretical state of ideal stability, like Darwinism nowadays. In order to reach this stage, Darwinism has needed to mature outside of the academic armchair for long periods of time. When one can elevate Darwinism to the prestigious status of "learnable" in the same simple and straightforward way as a child can learn the disquoted sentences of his or her own native language, something has been conquered: Darwin's theses have been strongly protected within a coherent system that can encompass entire cultural paradigms.
But it is necessary to modestly filter the results of the conquest. This does not mean that we also acquire the philosophical ability to construct this global coherence as a system of categories or a perfect scheme for truth representation.
As a culture, we have the blessing of keeping certain representations of necessary connections stable in a semantic manner. But this can be deceptive. It may appear as a blessing that certain scientific statements appear in lists of what is "learnable" with a structure almost as simple as our native language. We can call it the blessing of disquotation. That is a blessing we can also distrust.
There is no cost to having a healthy dose of distrust for truths that are so easily detached from quotation marks. Perhaps there is something behind such precious trivialities disguised as blessings.
After decades of reflective development, this doctrine was explored by linguistics through a systematization of Montague's model theory and continued to form the basis of systematic reflection in Donald Davidson's program.
recalcitrant conundrums that are still a hallmark of the analytical tradition's structure of debate. If we look closely, the presentation of Frege's theory as anticipating the scientific characterization of what it means to "grasp the meaning" is just the story told with a deceptive happy ending. The puzzles are what have really held this tradition together, and they keep alive the suspicion that the innocuous, scientific characterization of computation proposed to solve problems of meaning still needs better substantiation.
Frege's program to reduce the synthetic a priori to analytic super-coding already contained the germ of this development, but without the hard part of Kant's theory (his attempts to rescue the foundations of metaphysics). Already with Frege, the idea of sense could be understood as representing the argumentation connected with the attempt to find contingent truths which cannot be turned into false ones. It is quite obvious, and our scientific culture makes it clear, that we can know under what conditions, ceteris paribus, a true proposition can be asserted with no reversibility of its truth-value. So we would know a necessary condition to assert a contingent sentence: a contingent-empirical condition that cannot be changed without changing the truth-value of the sentence. We can model this contingent truth and so arrive at a proposition where, all other things being equal, that proposition cannot be transformed into a false one. But in what sense would this be a semantic model, and in what sense would it already be a scientific model? Can we exclude the possibility that this model encapsulates a large amount of encrypted historical assumptions and human experiences? The logical-positivist tradition stemming from Frege never answered these questions, because it excluded the possibility of a priori synthetic judgments.
CONCLUSION
(Russell, 1905) the other direction. That same direction is a pure referential coordinate. For similar reasons, we can identify the referential coordinate of "Russell does not exist" without having to include something like "possible Russell" in our quantification apparatus. But while the form of representation by coordinates seems prosaic and normal, the representation of the negative existential has an added touch of mystery. It seems as if we are dealing with non-existent Meinongian things. Therefore, it is understandable if linguists prefer the second characterization and philosophers the first. In On Denoting (Russell, 1905), the negative existential is presented as a puzzle, but the general nature of the problem and its commonalities with the other two are not deeply illuminated. After more than a century of professional analysis of the problem, we have the right to generalize it in one principle. The controversies generated by the encoding of the Frege-Russell puzzles can be presented as challenges to what Gareth Evans calls the Russell principle: "The principle is that a subject cannot make a judgment about something, unless he knows which object the judgment is about" (Evans, 1982, p. 89).
ATTEMPTS TO SOLVE THE CRISIS: THE IMPOSSIBILITY OF RECONCILING THE SECOND DIMENSION OF SEMANTIC THINKING AND A SIMPLE SEMANTIC REPRESENTATION OF THE TRUTH VALUES OF LANGUAGE
A single conundrum lies at the center of the whole matter: the notion of Sense, or the non-extensional component of meaning. This part of the meaning is not mechanically computable without a change of parameters. In intensional contexts, the parameters used to map the sentence's meaning are systematically ambiguous. They encode assumptions or content that enrich our ability to predict its equality with itself (in different contexts of evaluation), and therefore restrict the concept of equality that would allow one to identify the sentence with a single and objective proposition. These constraints affect our