Monday, September 16, 2024

why I am against statistical significance tests

 Copyright Carl Janssen 2024 September 16

Presenting an informed public with raw data, versus using statistical significance without raw data to dupe the public; or, reasons why I am against statistical significance tests as an overglorified standard in academia that is often more harmful than beneficial

This article is an incomplete description of my reasons, as there are more.  For example, reasons involving questioning the very idea that there are random events with probabilities are not addressed.

Although some ideas in probability-based statistics may have advanced the cause of science more than they harmed it, I suspect that statistical significance tests have done more to harm science than to advance it.  At the very least, using them as a common standard in scientific journal articles, and as the assumed default research method whenever measured values are involved in the biological and social sciences, has done more harm than good.

A common practice in peer-reviewed scientific journal articles is to assume data is normally distributed, often without testing whether it actually is, and then to arbitrarily assign an alpha value for a statistical significance test.  That test is often a t-test, even though many other significance tests are available that might have made more sense if the data was not normally distributed, or was not quantitative or interval data but was treated as if it were.  The public is then told whether the results are statistically significant based on that t-test, at least in media publications when doing so is politically expedient; when it is not expedient, the results are less likely to be mentioned.  When the data is published, the raw data is often excluded: only the mean, sample size, and standard deviation are typically presented as a summary.  This prevents alternative mathematical or statistical analyses of the data, unnecessarily hindering the advancement of science by suppressing access to data that funding and research already went into.  I would suggest that the most appropriate public response to this data suppression is to call into question the legitimacy of all conclusions involving statistical significance tests for which the raw data, except what is necessary to protect subject or patient confidentiality, is not published.

I believe that if probability-based statistics is to be used for the public good, it would be more useful to simply collect sample data from a population and then use it to predict the frequency of different outputs, or ranges of outputs, under different conditions to meet one's goals.  These goals differ across circumstances, so taking raw data, assigning it a presumed distribution such as a normal distribution with a given mean and standard deviation, publishing that in a peer-reviewed journal, and then hiding the raw data from the public is a disservice to science compared with simply giving the public the raw data and letting them decide what to do with it based on their goals and on what type of distribution they think it has.  Of course, one might object that publishing the raw data violates patient or subject confidentiality in the biological and social sciences.  Well, one should then ask whether publishing the mean and standard deviation violates confidentiality: to some degree it could, but not so much if you remove the subjects' names.  Likewise, you can simply publish the raw data from which the mean and standard deviation are calculated, removing the subjects' names and any other personal information through which they might be identified.

Although I do not like the idea of doing statistical significance tests at all, if they must be done then I would suggest publishing only the one- and two-tailed P-values, which let the public know which alpha values would have achieved one- or two-tailed significance, instead of arbitrarily assigning an alpha value and then telling the public whether the study was statistically significant based on that arbitrary preassigned choice.  The problem with assigning an alpha value and then declaring something statistically significant or not is that it is extremely misleading to the public when it comes to applications: something might be said to have made no difference that would have made a difference under a different alpha value, or vice versa.  It also creates a problem in that journals are more likely to publish research when statistical significance has been achieved, so researchers sometimes repeat the same experiment and publish only when significance is achieved for an arbitrarily assigned alpha value that fits the journal's standards for what is acceptable.  This results in extremely biased research, making significance at a certain alpha value look more common for a certain type of experiment than it would be if the replications where significance was not achieved were also published.  The journal's preferences for alpha values do not necessarily match the public's goals, which vary depending on what each individual wants to achieve and under which circumstances.

The alpha values in statistical significance tests are arbitrary.  If the research is done "blind", then whatever value the researcher assigns should not affect what data was collected; someone could just as well have assigned another value, and the result would have changed from significant to not significant according to that alpha value.  I would suggest that if a journal article's authors want to treat the data like a normal distribution for some part of their analysis, that is fine, but they should still publish the raw data and let the public decide whether it is normally distributed.  As for the alpha value, I believe the public would be better off if no alpha values were used at all to decide whether the data is statistically significant.  Instead, authors should give a two-tailed P-value and an additional one-tailed P-value in whichever direction the one-tailed test would succeed, or perhaps two one-tailed P-values, one in each direction.  The public would then know which alpha values would have resulted in statistical significance for a two-tailed t-test or a one-tailed t-test in either direction.  Depending on the one- or two-tailed alpha values they consider necessary for their specific goals, they could then decide what to do based on those P-values.
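To make this concrete, here is a minimal Python sketch of reporting one- and two-tailed P-values directly, with no alpha value chosen at all.  Rather than a t-test, it uses a permutation test on the difference in group means, which avoids any normality assumption; the function name and interface are invented for this illustration.

```python
import random
from statistics import mean

def permutation_p_values(group_a, group_b, n_iter=10000, seed=0):
    """Estimate one- and two-tailed P-values for the difference in group
    means by repeatedly reassigning the pooled observations to the two
    groups at random and seeing how often the shuffled difference is at
    least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = mean(group_a) - mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count_ge = count_le = count_abs = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
        if diff >= observed:
            count_ge += 1
        if diff <= observed:
            count_le += 1
        if abs(diff) >= abs(observed):
            count_abs += 1
    return {"one_tail_greater": count_ge / n_iter,
            "one_tail_less": count_le / n_iter,
            "two_tail": count_abs / n_iter}
```

A reader can then apply whatever alpha value suits their own goals to these P-values, or ignore alpha values entirely.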

How would the public decide what alpha value to use based on their goals?  The closer the alpha value is to zero, the less likely they are to mistakenly reject the null hypothesis when they "should have" accepted it, but the more likely they are to mistakenly accept it, or fail to reject it, when they "should have" rejected it.  There is no perfect alpha value that is identical for all individuals with all goals in all situations, since changing the alpha value does not reduce the overall chance of making an error but only trades a reduced chance of one type of error for an increased chance of the other.  For one individual in one situation, avoiding one type of error might matter more than avoiding the other, and they should choose the alpha value according to their specific goal in that specific situation, if alpha values are ever actually applied to anything in real life.

But are alpha values ever actually applied to any honest goal in real life?  I would suggest not.  I would suggest that none of the uses of alpha values serve an application in a real-life situation other than persuading people, and that the persuasion goals alpha values do serve in real life are never the kind of persuasion done in an ethical manner free of undue influence.

What are some of these reasons?

1. To persuade a journal to publish something, not to advance the cause of science but to accumulate more publications for a career goal.  I am not saying that career goals are bad, as career goals can be good or bad based on the motive and the results, but I would suggest that both the motives and the results of this career goal are bad because they mislead the public in exchange for money.

2. To trick someone into doing something based on something being, or not being, significant according to a journal.

3. To simply make it through an assignment you have been unduly influenced into doing, so that you can prove you know how to do statistical significance tests without thinking about the actual science of things.

4. To boost your ego in a bad way and feel like you have objectively proven something you predicted in advance, something that is not so clearly and unambiguously proven at all.  If it were clear and unambiguous, you would not need a statistical significance test to prove it in the first place, because a model would exist in which a specific output could be predicted for a specific input using some combination of algebra, trigonometry, and calculus equations, with no statistical probability theory invoked at all.  Someone might, for instance, insist they need to assign a one- or two-tailed test and an alpha value before doing the experiment to "eliminate bias", so they can say they called it correctly in advance without bias and boost their ego in a bad way.  I would suggest that if they were not invested in their ego in a bad way, they would be comfortable not needing to say they "called it" or "predicted it" correctly, and would simply publish the data with the P-values but no alpha value, as I already described, and let the public come to their own conclusions.  However, I would suggest that even publishing the P-values for a certain type of significance test is not necessary, because statistical significance tests are not really used for honest applications; if scientists really let go of their egos in a good way, they would simply publish the raw data of the experiment and let the public do whatever they want with it.

Okay, but if you are not going to do statistical significance tests, yet you claim that probability-based statistics with raw data might be good for something, what should the public do?  Let's say there is an experiment with data for group B, which had experimental variation B done to it, and data for group A, which had experimental variation A done to it.  The public should simply count, for each group, the number of data values closer to the results they want, divided by the total number of data values, and then choose A or B for the real-life application on that basis.  This could mean, for example, choosing which one generates a higher percentage within a data range, or has a higher or lower mean, median, mode, or some other function result.  Normally the public wants a certain type of result when performing a certain type of action, and they should simply choose whether A or B gets that type of result more frequently.  This type of application can often be done with the raw data without knowing whether the data has a normal distribution.
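This counting method can be sketched in a few lines of Python.  The function names and the goal range are invented for this example:

```python
def fraction_in_range(values, low, high):
    """Fraction of raw data points falling in the desired range [low, high]."""
    hits = sum(1 for v in values if low <= v <= high)
    return hits / len(values)

def choose_group(group_a, group_b, low, high):
    """Pick whichever treatment's raw data lands in the goal range
    more frequently, returning the label and that fraction."""
    fa = fraction_in_range(group_a, low, high)
    fb = fraction_in_range(group_b, low, high)
    return ("A", fa) if fa >= fb else ("B", fb)
```

No distribution is assumed anywhere: the decision comes straight from counting the raw data.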

For example, a percentile chart can be made without ever determining what type of distribution something is.  Someone can simply list the raw data and look at how many data points are above and how many are below a given point to estimate its percentile, without knowing the data's distribution.  Instead, some people have made this needlessly complicated, and I would propose they have not increased the accuracy of the estimate in doing so but decreased it in most cases.  First they calculate the mean and the standard deviation, then they hide the raw data.  Next they tell someone to estimate their percentile based on how many standard deviations they are from the mean, which may give a different result than estimating the percentile by counting how many points are above and how many are below that point.
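The counting version of the percentile estimate fits in one small function.  Counting ties as half below is one common convention, an assumption of this sketch rather than the only way to handle ties:

```python
def percentile_of(values, x):
    """Estimate the percentile of x by counting how many raw data points
    fall below it, counting ties as half below.  No distribution is
    assumed at any point."""
    below = sum(1 for v in values if v < x)
    ties = sum(1 for v in values if v == x)
    return 100.0 * (below + 0.5 * ties) / len(values)
```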

For example, let's say you are a shoe salesman and you want to stock shoes of each size based on the frequency of those shoe sizes among the public.  You could simply count what percentage of people in a sample have each shoe size, straight from the raw data.  Why would you waste your time calculating the mean and then the standard deviation, hiding the raw data from yourself, and using the mean and standard deviation to estimate frequencies for each shoe size, when that might be less accurate than just counting frequencies from the raw data?

Let's say you are a cardiologist and you have to decide which medicine, and what dose, to prescribe for a patient, where each medicine and dose increases or decreases blood pressure by a certain amount, whether as a percentage or as an absolute quantity.  Let's say you want to increase or decrease the blood pressure by no more than one amount but no less than another.  You could simply look at the list for group A with medicine A at dose A and the list for group B with medicine B at dose B, count which group has a higher percentage of listed values meeting the criterion of not modifying blood pressure too much or too little in the direction you want, and pick that medicine.  This actual application does not require figuring out whether the data is normally distributed or computing a mean and standard deviation, and it certainly does not require removing the raw data so that you cannot see it.

Now someone might object that just using the data is problematic, because maybe you do not have enough data points, and you need to find out whether you have enough to be certain enough of whatever conclusion.  I am not objecting to getting more data points.  But you have only the data points that have been collected, and you still have to make a decision.  Sometimes you must decide with the limited data you have, without the time or other resources to collect more.  And no matter how much or how little data you have, using this method is better than burying your head in the sand because you do not have enough data to reach a statistically significant result that feels powerful enough for an artificially assigned alpha.

Moreover, I would suggest that with this method you can get more data.  If all the raw data from past experiments were published, someone could simply merge the data from replicated past experiments into one list of data points for this method, instead of using statistical significance tests to filter out replications of studies that were not statistically significant, as has historically been common practice in many scientific journals.  Merging the raw data of replicated experiments would yield enough data points that having too few would be less of a problem.  On the other hand, hiding the raw data while also hiding replications that were not statistically significant increases the problem of not having enough data points to be certain enough.

How could data be merged?  You would not change the data from old experiments.  Let's say there is experiment 1, experiment 2, and so on, and each experiment has data for group A with treatment A and data for group B with treatment B.  Group 1A would be the data from experiment 1 with treatment A, and group 3B would be the data from experiment replication 3 with treatment B.  You could combine all the listed data for treatment A across all replications into a single list, and likewise all the data for treatment B, by collecting replications of the same experimental treatment from multiple scientific journal articles.  But this can only be done if raw data is published; it cannot be done properly and correctly if the raw data is removed and only means, standard deviations, and statistical significance at certain alpha values are given.
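The merging step itself is trivial once raw data is available.  Here is a minimal sketch, where each published replication is represented as a dictionary mapping treatment labels to its raw data list (a made-up representation for this example):

```python
def merge_replications(experiments):
    """Merge raw data from replicated experiments into one pooled list
    per treatment, without altering any individual data point.
    `experiments` is a list of dicts like {"A": [...], "B": [...]},
    one dict per published replication."""
    pooled = {}
    for exp in experiments:
        for treatment, values in exp.items():
            pooled.setdefault(treatment, []).extend(values)
    return pooled
```

The pooled lists can then be fed straight into the counting methods above.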

The pressure to fabricate data: students assigned homework are sometimes told to do a statistical significance test.  They might know the prediction the teacher expects and change the data on their homework so that the significance test produces the outcome they think the teacher wants, if they erroneously believe the teacher will give better grades when the results match the teacher's prediction.  At least, I hope such a belief would be erroneous and that the teacher would not subtract points when the results did not match the prediction.

The problem of the ability to fabricate data: statistical significance tests are often used where a degree of randomness is assumed.  If randomness is assumed, then the results are expected often not to replicate exactly.  And if results are expected not to replicate, then someone could skip the experiment entirely and make up data, and since replication is not expected because the data is "random", no one could argue that the person did not simply fabricate the data, on this very theory of randomness.  I would suggest we seriously contemplate the possibility that some scientific journal articles report data fabricated without any experiment being run at all, and that this might partly explain why people who try to rerun the experiment cannot get data similar enough to the journal article's to consider the results replicable.

So you can use raw data to figure out how frequently results fall in value ranges that meet or fail to meet your goals, and in my opinion that is a better application of research time, money, and material resources for helping the public than doing statistical significance tests.  But although I think that is an improvement, I still do not think it is the best use of resources in science.

I would suggest that this statistical way of looking at goal-meeting frequencies based on lists would be better replaced by putting more emphasis on equations involving algebra, trigonometry, and calculus that predict an output for a given input.  These equations could assign a margin of error to each input and a range of potential outputs for those input values.  But do we need statistical probability models for margin of error?  No!  If we have a ruler and must round to the centimeter, we can assign the maximum and minimum values the actual distance could reasonably take after rounding, based on the locations of the physical markings on the ruler, and no probability distribution models are needed for that.  We would take an equation that makes predictions from the inputs, plug in the input values that could potentially be there given the margin of error, and get a range of predicted output values.  If the measured result falls inside that predicted range, the equation's prediction is considered correct; if it falls outside, the prediction is considered wrong.  If we find that the equation predicts incorrectly, we make a new equation that would have predicted the results correctly, rerun the experiment, and see whether the new equation now predicts correctly.
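Propagating a measurement's margin of error through a prediction equation can be sketched without any probability model at all.  This illustration simply samples the input range densely and takes the extremes of the outputs, which is adequate only when the equation is smooth over that range; that smoothness, and the function names, are assumptions of the sketch:

```python
def interval_predict(f, input_low, input_high, samples=1000):
    """Push the input's margin of error [input_low, input_high] through a
    prediction equation f by sampling the range, returning the min and max
    predicted outputs.  Assumes f has no sharp spikes between samples."""
    step = (input_high - input_low) / samples
    outputs = [f(input_low + i * step) for i in range(samples + 1)]
    return min(outputs), max(outputs)

def prediction_correct(f, input_low, input_high, measured_output):
    """The equation counts as a correct prediction if the measured output
    falls inside the predicted output range."""
    lo, hi = interval_predict(f, input_low, input_high)
    return lo <= measured_output <= hi
```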

Statistical significance testing often, though not always, lacks the ability to predict an output for a given input.  I say often but not always because there are exceptions, such as linear regression, which does allow predicting an output for a given input.  Statistics is often used only to predict whether two outputs will differ, or whether one will be greater than the other, but it is not usually used to predict by how much the two outputs will differ.  You might get a mean and a standard deviation, but you usually cannot get an equation that predicts, as a function of the input, what the mean and standard deviation will be the next time you run the same experiment.

I would suggest the world would be a better place if people focused their research on finding algebra-, trigonometry-, and calculus-based equations that accurately predict outputs from inputs within the expected margin of error, rather than on experiments so poorly designed that the inability to make accurate predictions of output values is blamed on some random variability that limits you to guessing which group is greater or less than the other, but not by how much.  Although I really do not like statistical significance testing, and would consider its so-called necessity a sign that an experiment was poorly designed, I would suggest there is a place for statistics in science.  Statistics can be a starting place where you have to admit that you do not really know what you are doing and your body of knowledge has not yet reached the level of a competent scientific model; one might call this pre-science, proto-science, or primitive science.  Maybe you can use it for a little while if you admit that you do not yet know what you are doing.  But eventually you should move on and make progress with your models to the point where statistical significance testing is no longer needed and you have a grown-up level of competence in that area of study, where you can make predictions using algebra, trigonometry, and calculus.  This means that as a field of science advances, more and more algebra, trigonometry, and calculus should appear in peer-reviewed journal articles, and less and less statistical significance testing.
Unfortunately, the trend seems to me to run in the opposite direction in the biological and social sciences, which suggests we are not making progress but going backward.  The fact that statistical significance tests were not immediately mocked and abandoned by the community of people who call themselves scientists, but were instead embraced and pushed on graduate students in most fields of biological and social science, suggests that in many ways these sciences are regressing rather than progressing, in spite of increased material resources, as more electronic tools to store and measure data points are manufactured, resources that could have moved these so-called sciences forward under another type of methodology.

Lastly, I would suggest that the so-called social sciences might be better off if people went back to roots that were less quantitative, and if the so-called social sciences were not called sciences at all but were instead thought of as philosophies and religious worldviews about human behavior that might or might not be true.  A person could present an idea about human behavior and the mind, and the audience could simply contemplate whether it might be true, instead of being offered proof via the illusion of scientific objectivity in the so-called scientific process of statistical significance testing.  The kinds of claims in the so-called social sciences are grand claims that cannot be supported by science but must be thought about before conducting science in the first place, much like religious or philosophical worldviews about morality, free will, the nature of the human mind, and so on.  Before I consider whether choosing to do A or B results in some output, I must presuppose my ability to choose how I run my experiment; this is a philosophical prerequisite for science, not science itself.  The social "sciences" have sold themselves short by pretending to be science through the false objectivity of statistical significance testing, instead of embracing their grand place as part of philosophy and religion.

Saturday, September 7, 2024

Vacuous Truth or Vacuous Falsehood

 Copyright Carl Janssen 2024 September 7


Let's say two days ago, or in other words the day before yesterday, Rob said, "If I do P tomorrow then Q will happen tomorrow."

Now today Samantha says, "Q never occurred yesterday, so Rob was lying."

In reply Alexander says, "A statement can only be one of two options, true or false, not both, and there is no third possibility.  P did not happen yesterday, so Rob's statement is not false, and that leaves only one other option, which makes Rob's statement vacuously true.  All conditional statements with false antecedents, also called false protases, are true, whatever their consequents, also called apodoses."

Samantha replies, "If Rob had said, 'if I do P tomorrow then Q will not happen tomorrow', would that also have been vacuously true?"

Alexander replied, "Yes."

Then Samantha said, "Then since we know that Rob said 'if I do P tomorrow then Q will happen tomorrow', but it would also be vacuously true that Q would not have happened tomorrow, this proves Rob's statement that Q would happen vacuously false, because if it is true that Q did not happen that day, then it is false that Q did happen that day."

Alexander replied, "P did not happen yesterday, so there is no evidence that Rob's statement is false, and if there is no evidence that it is false then it must be true."

Samantha replied, "The only way for there to be evidence that Rob's statement was true would be if P had happened yesterday and Q had also happened yesterday, and since there is no evidence that Rob's statement is true, it must be false."

Alexander replied, "Now you have me all confused, Samantha.  The statement can only be true or false, one of those two options and only those two.  But a compelling case can be made that it is true, and a compelling case can also be made that it is false."

Samantha replied, "Although you are correct that a statement cannot be both true and false simultaneously when measured the same way, the reason for your confusion is that there are more than two options.  You cannot know whether Rob's statement is true or false: since P never happened, Rob's statement is untested.  We could try to guess whether Q would have happened if P had happened, and guess whether the statement would be true or false on that basis, but since P never happened, his statement is better described as untested than as confirmed true or false.  Always using a logic that limits things to the two options of true or false does not line up well with the scientific method, because some things, although in reality they may be only true or false, are untested; we should label them untested and admit we do not know which value is correct, instead of insisting on assigning a value of true or false when we do not know which is right.  Also, depending on how you look at things, there could be options other than true or false.  It is important to keep in mind that being proven true is not the same as being true, and being proven false is not the same as being false.  Something cannot be both true and correctly proven false under a single consistent standard, but something can be both true and not proven true at the same time.  Likewise, something cannot be both false and correctly proven true under a single consistent standard, but something can be both false and not proven false at the same time.  One might argue that under a certain standard there are two potential values of one kind for a statement, true or false, but simultaneously three potential values of another kind: proven true, untested, or proven false.
If something is not proven true, then it could be true, untested, proven false, or false, but it could not be proven true.  If something is not proven false, then it could be false, untested, proven true, or true, but it could not be proven false.  If something is correctly proven true, it can only be true.  If something is correctly proven false, it can only be false.  If something is true, it can be true, proven true, or untested, but it cannot be false nor correctly proven false.  If something is false, it can be false, proven false, or untested, but it cannot be true nor correctly proven true.  If something is untested, it could be true or false, but it cannot be proven true nor proven false.  In this standard, if something is proven true then it is both true and proven true, and if something is proven false then it is both false and proven false.  This is only one standard of looking at things; under another standard there could be more nuanced options than true and false.  Under such a standard, something could not be simultaneously true and not true, but being not true would not always mean it is false, because there could be a third option.  Likewise, something could not be simultaneously false and not false, but being not false would not always mean it is true.  It is also important to keep in mind that, depending on the viewpoint, saying that some object S is not not K is not necessarily the same as saying that object S is object K, although it could be the same under another viewpoint; maybe one of those two viewpoints is wrong, or maybe both, or even additional viewpoints, could be right depending on the circumstances.  One of these viewpoints holds that if an object S is not not object K, then object S is the same as object K; I will not go into further detail on this viewpoint because it is the standard one.
Now for an unorthodox viewpoint.  Let's say there is a programming function where you select an input from a list of three words, spoon, fork, or knife, and it gives you an output that is one of those three words but is not the word you selected.  So if you select spoon, you will get an object that is not a spoon, such as a fork or a knife.  But if you run the function a second time with the output you got from the first run, you could end up with a spoon again, or you could end up with a fork or a knife, so long as it is not the same object as the first run's output.  You could say the function negates your choice, so running it twice is negating your choice and then negating it again, yet you would not necessarily end up with the choice you started with, even though it is claimed that double-negating something brings you back to where you started.  Perhaps double negation only guarantees consistently ending up where you started if each negation selects the list of every object not on your list, with no object from your list included.  To be more technical, we could talk about every item in a sample space, versus every item including items we are not working with, and still have it work, so long as we never list items outside the sample space when negating, and we stick with the same sample space to select from, neither adding to it nor subtracting from it in any operation, although I am not sure this is worded correctly because the language is a bit too technical in its definitions for me at this point.  I also want to point out that double negation only consistently works this way in English, ending up with what you originally had, when the two words saying 'not' are right next to each other with no words in between.
For example, to say that if something is a dog it is an animal is a statement that is always true, but to say that if something is not a dog it is not an animal is a statement that is not always true, even though two negations were added to the sentence.  However, to say that if something is not not a dog then it is an animal would be a statement that would always be true, based on a certain viewpoint, because the two times the word "not" is used each 'not' is next to the other 'not' with no words in between"

Alexander replied, "But how do you apply this?"

Samantha replied, "It has been claimed that if P happens then Q will happen on the same day.  Since P did not happen, it has not been confirmed or proven false that Q will happen if P happens, but just because it was not proven false does not mean that it is proven true.  Since P did not happen, it also has not been confirmed or proven true that Q will happen if P happens.  The statement could be true and the statement could be false, but the statement could not be both true and false; the statement is untested and neither proven true nor proven false.  It is important not to confuse true with proven true, nor to confuse false with proven false.  It is important to remember that not false does not necessarily mean the same thing as true, nor does not true necessarily mean the same thing as false, depending on what logic system you are using.  And finally, depending on how you try to negate something twice, it does not necessarily result in ending up where you would have started with zero negations, if that is somehow related to this confusion, although I am not sure if it is.  If there is a list of three or more options when someone thinks there is a list of only two options, they might choose something that is not object 1 and assume it is object 2, then choose something that is not object 2 as a second negation and assume they are going back to object 1 and undoing the second negation, when they could actually end up with object 3 by choosing something that is not the current object a second time.  For example, if someone says choose something that is not a spoon and they select a knife, and then they say now choose something that is not the thing you just selected, they might think the only option is a spoon when it could actually be a fork."
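The three-word negation function described in the dialogue could be sketched in Python like this.  This is a minimal sketch of my own; the names are made up, and the random choice stands in for whatever selection rule such a function might use:

```python
import random

# The three-word sample space from the dialogue.
WORDS = ("spoon", "fork", "knife")

def negate(word):
    """Return one of the OTHER two words at random, never the input itself."""
    return random.choice([w for w in WORDS if w != word])

once = negate("spoon")   # "fork" or "knife", never "spoon"
twice = negate(once)     # any word except `once`; possibly "spoon" again
print(once != "spoon" and twice != once)  # True
```

Running `negate` twice is a double negation that need not return to the starting word, which is exactly the point the dialogue makes: negating by picking one arbitrary non-matching item is not the same as negating by taking the full complement within the sample space.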

https://en.wikipedia.org/wiki/Consequent

https://en.wikipedia.org/wiki/Antecedent_(logic)

https://en.wikipedia.org/wiki/Paradoxes_of_material_implication

https://en.wikipedia.org/wiki/Principle_of_explosion

https://en.wikipedia.org/wiki/Double_negation

https://en.wikipedia.org/wiki/Law_of_excluded_middle

https://en.wikipedia.org/wiki/Counterfactual_conditional

https://en.wikipedia.org/wiki/Vacuous_truth

https://en.wikipedia.org/wiki/Three-valued_logic

Saturday, August 24, 2024

Calculating sine of average of angles and cosine of average of angles from tangent of average of angles and other proofs for other trigonometric identities

Copyright Carl Janssen 2024 August 24

Calculating sine of average of angles and cosine of average of angles from tangent of average of angles and other proofs for other trigonometric identities

Explaining with a lot of words

It might be technically better to use the term point instead of vertex

This proof is intended to work for real number values, and all values plugged in are intended to be real numbers for the way the proof is written, but that says nothing about whether or not the end result will work if either complex or purely imaginary numbers are used

All angles mentioned in this proof before the second use of the word "solved" refer to angles measured at the origin relative to a horizontal line y = 0 and a second line segment in quadrant 1 of the unit circle

Although this proof will in the end work for any real number angles, it is easier to visualize within quadrant 1, for positive angles greater than 0 degrees and less than 90 degrees, with the angle measured at the origin

If you take two triangles, each on the unit circle with a unitless radius of 1, sharing the Cartesian origin (0,0) as one of their vertices and (1,0) as a second vertex, with the third or interesting vertex at (cos(angle), sin(angle))

And convert the interesting vertex of each triangle into Cartesian coordinates

Then if you take the average of the Cartesian coordinates of the two interesting vertices for the two angles, you will get a new vertex which is not on the unit circle and has a distance less than 1 from the origin (unless the two angles are equal)

If you draw a line segment from the origin to the new vertex created by the average already mentioned, then that line segment will have the same slope as a line through the origin at an angle equal to the average of the two original angles, and the length of the line segment will not be 1

Thus you can use the coordinates of that new point to calculate the tangent of the average of the two angles for the original triangles on the unit circle, even though that point is not on the unit circle, so its coordinates can not directly be used to calculate the sine or cosine

If you multiply both the horizontal or X and the vertical or Y Cartesian coordinates of the new vertex already mentioned by the same constant, and select the correct constant, you can get coordinates which form a line segment of length 1 from the origin with the same slope, which can be used to form a third triangle on the unit circle.  With this third triangle the sine and cosine can be calculated.  The cosine of the average of the two angles will be the X coordinate of this newest line segment of length 1, and the sine will be the Y coordinate of this newest line segment of length 1, which has the same slope as that for the average of the two angles in the original triangles.

Explaining using more algebra and fewer words.  If you can not understand why what is being done works, go to the wordy section above or to the diagram if I add it later

Given

tangent(0.5A+0.5B) = ( N * [ 0.5sin(A) + 0.5sin(B) ] ) / ( N * [ 0.5cos(A) + 0.5cos(B) ] )

sin(0.5A+0.5B) = N * [ 0.5sin(A) + 0.5sin(B) ]

cos(0.5A+0.5B) = N * [ 0.5cos(A) + 0.5cos(B) ]

Solve for both

sin(0.5A+0.5B) 

cos(0.5A+0.5B)

as functions of these four functions

cos(A), cos(B), sin(A), sin(B)

Solution

This value of N when multiplied by the coordinates of ( 0.5cos(A) + 0.5cos(B),  0.5sin(A) + 0.5sin(B) ) creates coordinates of a vertex which can form a line segment with a length of 1 from the origin (0, 0)

This line segment has a slope equal to the tangent of the angle ( 0.5A + 0.5B ).  N is calculated using the Pythagorean theorem to calculate the distance from the origin (0,0) to ( 0.5cos(A) + 0.5cos(B),  0.5sin(A) + 0.5sin(B) ) and then taking the reciprocal of that distance

N= 1 / ( [ 0.5sin(A) + 0.5sin(B) ] ^ 2 + [ 0.5cos(A) + 0.5cos(B) ] ^ 2 ) ^ 0.5

sin(0.5A+0.5B) = [ 0.5sin(A) + 0.5sin(B) ] * N

cos(0.5A+0.5B) = [ 0.5cos(A) + 0.5cos(B) ] * N

Solved but needs to be simplified to get other trigonometric identities in simplified form
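As a quick numerical sanity check of the construction (average the unit circle coordinates, rescale by N, read the sine off the Y coordinate and the cosine off the X coordinate), here is a minimal Python sketch.  It assumes the two angles differ by less than 180 degrees; outside that range the averaged point collapses toward the origin or points in the opposite direction:

```python
import math

def avg_angle_sin_cos(a_deg, b_deg):
    # Average the unit-circle coordinates of the two angles,
    # then rescale the midpoint back out to length 1 with N.
    a, b = math.radians(a_deg), math.radians(b_deg)
    mx = 0.5 * math.cos(a) + 0.5 * math.cos(b)  # averaged X coordinate
    my = 0.5 * math.sin(a) + 0.5 * math.sin(b)  # averaged Y coordinate
    n = 1.0 / math.hypot(mx, my)                # reciprocal of distance from origin
    return my * n, mx * n                       # sine, cosine of the average angle

s, c = avg_angle_sin_cos(30.0, 80.0)  # average angle is 55 degrees
print(abs(s - math.sin(math.radians(55.0))) < 1e-12)  # True
print(abs(c - math.cos(math.radians(55.0))) < 1e-12)  # True
```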

Solving for the square of the sine of a half angle by letting B = 0

sin(0)=0

cos(0)=1

sin(0.5A+ 0 ) ^ 2 = [ 0.5sin(A) + 0.5sin(0) ] ^ 2 * N ^2

[sin(0.5A) ]^2= [0.5sin(A)]^2 / ( [ 0.5sin(A) + 0.5sin(0) ] ^ 2 + [ 0.5cos(A) + 0.5cos(0) ] ^ 2 )

[sin(0.5A) ]^2= [0.5sin(A)]^2 / ( [ 0.5sin(A) ] ^ 2 + [ 0.5cos(A) + 0.5 ] ^ 2 )

[ 1 + cos(A) ] ^ 2 = 1 + 2cos(A) + cos(A)^2 = 2 + 2cos(A) - [sin(A)]^2

 [ 0.5cos(A) + 0.5 ] ^ 2 = 0.25 * [ 1 + cos(A) ] ^ 2 = 0.5 + 0.5cos(A) - 0.25[sin(A)]^2

[ 0.5sin(A) ] ^ 2 = 0.25[sin(A)]^2

[sin(0.5A) ]^2= [0.5sin(A)]^2 / ( [ 0.5sin(A) ] ^ 2 + 0.5 + 0.5cos(A) - 0.25[sin(A)]^2 )

Since (0.5)^2 / 0.5 = 0.5 = 1 / 2, the coefficients simplify

[sin(0.5A) ]^2= [0.5sin(A)]^2 / [ 0.5 + 0.5cos(A) ] = [sin(A)]^2 /  [ 2 + 2cos(A) ]
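The simplified half angle identity can also be checked numerically.  A minimal Python sketch, assuming A is not 180 degrees, where cos(A) = -1 would make the denominator zero:

```python
import math

# Check [sin(0.5A)]^2 == [sin(A)]^2 / (2 + 2cos(A)) for several angles.
for a_deg in (10.0, 45.0, 123.0, 300.0):
    a = math.radians(a_deg)
    lhs = math.sin(0.5 * a) ** 2
    rhs = math.sin(a) ** 2 / (2.0 + 2.0 * math.cos(a))
    print(abs(lhs - rhs) < 1e-12)  # True for each angle
```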


https://www.geogebra.org/graphing


Monday, July 29, 2024

Getting something for nothing of value through money

Copyright Carl Janssen 2024

I am publishing this in 2024.  This was in my drafts from an unknown date

Below is all that was in this draft other than the title, "Getting something for nothing of value through money"

Ponzi scheme

money made out of dung

I will add the following comments

Imagine if someone gave you a piece of dung that was even less valuable than normal dung, which you did not want to eat or touch or use for tools, and you did not even want to burn it or use it for fertilizer.  But they said this piece of dung had value because you could trade it for other things that had value.  They would get something that is valuable to you, through you, in exchange for something that has no use value to you.  You could only trade this worthless dung if you convinced other people that this worthless dung has value, by recruiting new members to believe this worthless dung has value or maintaining the belief of the current members in the value of this worthless dung.

This dung would be a Ponzi scheme, just like money is a Ponzi scheme if you can not use it for anything other than trading it.  The first person to give away this dung got something of value from someone else in exchange for losing nothing of value to them, and every person after them lost the value of whatever they traded to get the dung in exchange for a piece of dung that is of no value to them.  Each person could recover their loss by trading the piece of dung to the next person, but they gained nothing of value by giving away their prized possession for the piece of dung in the first place.  Money is like a game of hot potato, where people throw a potato from one person to another and the last person holding the hot potato when the time runs out loses, except instead of throwing a potato you are throwing around dung that infects the minds of people who have not developed the mental immune system to see through the lie.


Involuntary taxation forbidden in Catholic Catechism

Copyright Carl Janssen - This will be published right now in 2024 - This was written at an unknown date - It was in my drafts

The Catechism of the Ultramontane Roman Catholic Church approved by "pope" John Paul 2 clearly forbids involuntary taxation, which shall be referred to as taxation for short.

Most people say taxation is ok because it is for a greater good, but to paraphrase the catechism, it is a sin to do an evil action even if it is done to achieve a good result, specifically listing murder, theft and lying.  Taxation is lying, to claim someone owes a debt for something they did not agree to and did not owe on account of causing harm to an individual or their possession of property.  Taxation is also theft, since it is taking something that rightfully belongs to someone else and not you, without their properly informed consent, as they were lied to and coerced with the threat of violence, possibly resulting in murder if they do not comply.  Taxation involves three cardinal or mortal sins that are still sins even if a good is promised in return, such as creating roads, feeding the poor, providing infrastructure, paying for research, education or healthcare, or hiring people to defend people from violence, etc.

Furthermore taxation involves an absence of the four cardinal virtues of prudence, fortitude, temperance and justice.


Taxation is an absence of prudence because it is a choice not to exercise your conscience to realize taxation is a violation of the commands not to murder, lie and steal.

Taxation is an absence of fortitude because it involves quitting before using the prudence to figure out how to get the good you want without resorting to sin, or a lack of perseverance in exercising temperance if you must deny the gratification of achieving a good you want in order to avoid evil.

Taxation is an absence of justice because it involves bearing false witness, stealing from the rightful owners to give to those who do not rightfully own something, and murdering or threatening to murder people who should not be executed.

Taxation is a failure to implement the grace to live a more virtuous life.

Taxation is also an absence of the theological virtues of faith, hope and charity

It is an absence of faith as it is a failure to profess and witness to God's moral teachings and involves a lifestyle of dead faith.

Taxation is an absence of the virtue of hope because it is a failure to imagine the possibility that things could work for good if one is obedient to God's moral teachings.

Taxation is an absence of charity because murder, lying and theft are opposed to love.

Taxation involves a boastful arrogance and lack of generosity, because it is worse than giving all you have to the poor that you may boast, yet lacking love.  Taxation is giving away what belongs to someone else, that you may boast of feigned generosity, instead of giving what belongs to you.

Taxation is a violation of the fruits of charity of joy, peace and mercy.

Taxation is a violation of joy because coveting is a killjoy

Taxation is a violation of mercy, because how much less merciful is it to harm someone who has done you no wrong than to harm someone who has done you wrong.  Taxation of all white people, or all males, or the Germans, for alleged sins of their ancestors, as some people have suggested, is an unjust vengeful attitude that lacks mercy.

Taxation is an absence of peace as it involves the threat of violence.  Taxation also stems from anxiety of what will happen if one can not use taxation to get the goods desired.

Taxation is oppositional to beneficence, friendship and communion, as it involves reaching out to a third party bureaucratic system for receiving help instead of developing friendships, and using a third party bureaucracy as an excuse not to exercise beneficence towards your neighbor.

Taxation is an absence of benevolence as there is nothing kind about it

Taxation is a violation of the seven gifts of the spirit





Does Ayn rand endorse a lack of empathy?

Copyright Carl Janssen 2024

This was an old draft, all that was in it was the title, "Does Ayn rand endorse a lack of empathy?"

It was written at an unknown date at or prior to 2024.  I do not remember what I was going to say about that, but it is probably a question a lot of people have.  As for the question of whether Ayn Rand endorsed a lack of empathy, I do not currently feel I have an answer, and I do not know if I thought I had an answer when I wrote the title.

Optimizing violence according to political maps of reality.

Copyright Carl Janssen 2024

The following was in drafts with the exact wording as follows.

"Graph as ordinal data violence imposed by the state vs violence imposed by other sources and danger, suffering and reduced pleasure imposed by lack of state services and bribes"

The following is new content that I am adding in 2024

I think I was going to make a graph, for each worldview, of the level of violence from the State versus how much violence from non state sources that worldview believes would happen at that level of State violence

The monarchists might believe that once State violence goes below a certain minimum amount, violence from non state sources would increase; thus, to minimize the total amount of violence from both state and non state sources, a particular non zero level of State violence is needed.  The minarchists might believe state violence must be set just right, since setting it above or below a certain number would increase the violence from state sources plus non state sources

Some anarcho capitalists might believe that decreasing State violence to zero will not increase violence from non state sources compared to a higher level of State violence.  They would not believe that lowering state violence below a certain amount would guarantee an increase in violence from non state sources.  Thus some anarcho capitalists might believe that in order to minimize the violence from state sources plus non state sources, the total amount of violence from state sources should be set to 0

Other people might not care about the level of violence and simply want to maximize the amount of pleasure, comfort and or happiness they have right now, and or minimize the amount of pain, suffering and or inconvenience they have right now.  They might be willing to increase the total amount of state and non state violence they receive in order to get better service from government and non government sources
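The kind of comparison described above could be sketched as a tiny optimization.  The belief curves and numbers below are entirely made up for illustration, and are not claimed to represent any worldview accurately; the point is only that different believed curves yield different optimal levels of State violence:

```python
# Hypothetical belief functions: state violence level -> believed non-state violence.
def minarchist_belief(state):
    return (state - 2.0) ** 2 + 1.0  # believes a "just right" level exists

def ancap_belief(state):
    return 3.0                       # believes non-state violence does not depend on it

def total_violence(belief, state):
    # Total violence is state violence plus the non-state violence believed to result.
    return state + belief(state)

def best_state_level(belief, levels):
    # Pick the state violence level that minimizes believed total violence.
    return min(levels, key=lambda s: total_violence(belief, s))

levels = [x / 10.0 for x in range(0, 101)]          # grid from 0.0 to 10.0
print(best_state_level(minarchist_belief, levels))  # 1.5 (a nonzero optimum)
print(best_state_level(ancap_belief, levels))       # 0.0
```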

Money creates economic calculation problems

Copyright Carl Janssen

The following was in drafts.  I thought I wrote about this somewhere else in more detail, but I am publishing it.  I am publishing this in the year 2024 but I do not know what year it was written.  It has at least one typo.  This is exactly how it was written below.

https://en.m.wikipedia.org/wiki/Economic_calculation_problem


https://en.m.wikipedia.org/wiki/Lange_model
Obsfuscates

 You primarily need to know what items you want and which elements (as in the periodic table) form them and then how much energy the physical or chemical changes require.  Most of the things required for day to day living are water, calories, vitamins, protein and dietary elements as well as simple tools or shelter.  The goal is to use something similar in concept to Maslow's ladder

It would not be exponentially increasing unless you want or need an exponentially increasing number of things.although it would be potentially large if you look at the periodic table but many elements are only needed in trace amounts.
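The kind of in-kind calculation the draft gestures at could be sketched like this.  The items, element amounts and energy figures below are invented purely for illustration; the point is only that needs can be summed directly in elements and energy rather than in money:

```python
# Hypothetical recipes: element masses in grams, energy in kilojoules.
ITEM_RECIPES = {
    "bread": {"elements": {"C": 40.0, "H": 6.0, "O": 30.0}, "energy_kj": 900.0},
    "knife": {"elements": {"Fe": 80.0, "C": 1.0}, "energy_kj": 5000.0},
}

def plan(wanted):
    """Sum the elements (grams) and energy needed for a list of wanted items."""
    elements, energy = {}, 0.0
    for item in wanted:
        recipe = ITEM_RECIPES[item]
        energy += recipe["energy_kj"]
        for element, grams in recipe["elements"].items():
            elements[element] = elements.get(element, 0.0) + grams
    return elements, energy

needs, energy = plan(["bread", "bread", "knife"])
print(needs["C"], energy)  # 81.0 6800.0
```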

Feminist flat shaming skinny women's bodies

https://www.chess.com/forum/view/off-topic/flat-shaming

https://web.archive.org/web/20240729201730/https://www.chess.com/forum/view/off-topic/flat-shaming

https://web.archive.org/web/20180713104703/http://theweek.com/articles/497091/australias-small-breast-ban

Government human experiments


https://m.youtube.com/watch?v=18_ixbpmXOQ

The link above was in drafts and no longer works; the link below is the archived version

https://web.archive.org/web/20170627181259/https://www.youtube.com/watch?v=18_ixbpmXOQ&app=desktop

Money the unsung weapon of mass destruction

https://m.youtube.com/watch?v=l6uLUaqgWY0

The link above was in drafts with the title, "Money the unsung weapon of mass destruction"

Special Relativity Experiments short

 Copyright Carl Janssen 2024 I do not want to delete this content or edit it to remove things but I am not going to finish it.  I will copy ...