Statistics: Are they a valid test for results of clinical research?

Discussion in 'General Issues and Discussion Forum' started by David Smith, Feb 26, 2008.

  1. David Smith

    David Smith Well-Known Member


    Dear All

    When used correctly, statistics can be a useful analytical tool for judging the magnitude and significance of the correlation between variables of interest.

    However, correct application of statistical analysis requires adherence to rigorous protocols. If the protocols used to collect the data are not equally rigorous, then statistical analysis of that data becomes absurd.

    In clinical trials the cohorts are often small and the definitions for inclusion / exclusion are often vague. Defining certain parameters like 'normal' is difficult and finding enough subjects of the correct type is also difficult.

    Ioannidis JPA, in his paper 'Why most published research findings are false' (PLoS Medicine, Aug 2005), appears to identify several reasons for this:

    1) The research question is not valid in the first place and can never be true.

    (E.g., my analogy would be: like trying to find the reason for differences in redness of cornflowers when you only collected blue ones, and cornflowers never grow red anyway.)

    2) Lack of power, and lack of pre- and post-trial tests for power, which is often difficult to address since the research is unique and has a small cohort.

    3) Bias: Bias appears to be a major factor in reducing true statistical significance.
    Bias can often be unavoidable at best, and can reduce a P = 0.05 (a 1 in 20 chance of the results being down to a fluke) to P = 0.33 (a 1 in 3 chance of being a fluke). Ioannidis says: 'Statistical significance can then often reflect the significance of bias rather than the significance of the correlation of interest.' (See the sketch below.)
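
    Ioannidis actually quantifies this in the paper: the post-study probability that a claimed finding is true (the positive predictive value, PPV) is expressed in terms of the pre-study odds R, the error rates, and a bias term u. Here is a rough Python sketch of the formulas as I read them from his tables; the numbers plugged in are illustrative choices of mine, not from the paper:

        # PPV formulas as I read them from Ioannidis (2005)
        # R     = pre-study odds that the probed relationship is true
        # alpha = Type I error rate (e.g. 0.05)
        # beta  = Type II error rate (power = 1 - beta)
        # u     = bias: proportion of analyses that should not have been
        #         "findings" but get reported as such anyway

        def ppv_no_bias(R, alpha=0.05, beta=0.2):
            return (1 - beta) * R / (R + alpha - beta * R)

        def ppv_with_bias(R, u, alpha=0.05, beta=0.2):
            num = (1 - beta) * R + u * beta * R
            den = R + alpha - beta * R + u - u * alpha + u * beta * R
            return num / den

        # An underpowered exploratory study: 1-in-10 pre-study odds, 20% power
        print(ppv_no_bias(0.1, beta=0.8))         # ~0.29
        print(ppv_with_bias(0.1, 0.3, beta=0.8))  # ~0.12 with 30% bias

    Even before any bias, a 'significant' finding from such a study is more likely false than true; bias only drags it lower.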

    This is the point Simon Spooner has often made when talking about bias and its effects on the statistical analysis of research.

    Apparently bias can even be built into commercial software products that data mine and actively look for correlated data.

    Researchers may not be experts in statistics and tend to use software to analyse the data they collect: data in one end, statistical results out the other.
    We trust that the output reliably characterises the input data in the terms we wish to view it, and often just use standardised parameters like P = 0.05 to justify significance.

    Is this good practice? Do we put too much weight on statistics nowadays?
    How can we ensure that our research is more robust and reliable, in terms of the statistical outcome reflecting the population parameter more accurately?

    This is what Ioannidis says:


    Cheers Dave
     
    Last edited: Feb 26, 2008
  2. Atlas

    Atlas Well-Known Member

    The quicker the musculoskeletal professions start to seriously question all these statistically significant findings, the quicker these professions can finally move forward.

    When you are in clinical practice, and you apply these "statistically significant findings" and discover that the results don't match the promises, you quickly realise that a majority of academic research is not only unhelpful to the clinician...it is actually counter-productive.

    For years now, every man and his dog has referred to an article that suggests strapping has no physical effect after 20 minutes.

    For years now, every physiotherapist has used transversus abdominis and core stability as the front line for back pain.


    A literature review that I was shown recently went back and did a retrospective analysis of the 40+ STATISTICALLY significant research papers on the effectiveness of core stability for low back pain. From memory, they found that only 3 of them were CLINICALLY SIGNIFICANT. Again from memory, they suggested that a priori outcome measures should be determined before the study (instead of going through the analysis afterwards and finding one that moved in your favour), along with several other important reforms. I will try and find the article.
     
    Last edited: Feb 27, 2008
  3. Craig Payne

    Craig Payne Moderator

    I think we are confusing a number of things:

    1. The quality of the methodology - ie elimination of potential bias; the validity & reliability of the tools used; a priori sample size calculation; etc

    2. The generalisability of the results - ie what were the inclusion/exclusion criteria, and how well does the sample match the population that the results might be applied to

    3. Statistical analysis - ie in this context it's nothing more than a number giving the probability that the result in the sample would match the population

    4. Clinical vs statistical significance (which I assume is what you may be getting at)

    There is an accumulating body of literature (which I am not too familiar with) on the minimal important difference needed to make a difference to a patient. For example, suppose we are comparing the effect of an intervention vs placebo, using a 10cm VAS as the outcome measure. Let's say there is a 3mm better pain reduction with the intervention compared to placebo, and let's assume there was a large sample size, so that the mean 3mm difference is statistically significant. BUT, if you said to a patient that if I give you this intervention, the research says you will get a 3mm improvement on the 10cm scale ... they will tell you to bugger off. There is a lot of work going on looking at the minimal changes in tools needed before a change becomes an important, clinically relevant change rather than merely a statistically significant one.
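
    To make that concrete, here is a minimal simulation sketch of the VAS example; the means, the 20mm SD and the 2000 patients per arm are illustrative numbers of mine, chosen so the sample is large:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 2000                          # per group; deliberately large
        placebo = rng.normal(50, 20, n)   # post-treatment pain on a 100 mm VAS
        treated = rng.normal(47, 20, n)   # a mere 3 mm better on average

        t, p = stats.ttest_ind(treated, placebo)
        print(f"difference = {placebo.mean() - treated.mean():.1f} mm, p = {p:.2g}")
        # p lands far below 0.05, yet no patient would notice 3 mm on a 10 cm scale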

    Hopefully someone who knows more about this than me can chip in. Karl? Joel?
     
  4. David Smith

    David Smith Well-Known Member

    Craig

    Thanks for your reply.

    I have only done minimal study of statistics and only understand the basics (I think). So, not being an expert, do I know enough about statistics to be able to apply them reliably? There is a lot of complicated theory about statistical logic that goes over my head, so how do I know that the statistical analysis is fit for purpose?

    You wrote:

    The 'scientific method' lays out a philosophical program for the acceptance of scientific ideas ('hypotheses') based on rejection of null hypotheses (Popper, K. R. 1968. The Logic of Scientific Discovery. Harper and Row, New York).

    My point is this: if 1 and 2 are rubbish in terms of their relevance to statistical analysis, then 3 will give you rubbish also.

    You can't rely on statistical analysis of data answering the question "How long is a piece of string?" if, first, you only collected relatively short pieces because you didn't know how long a piece of string could be, and in your experience the longest piece of string was only the length of your outstretched arms; secondly, you didn't know precisely what could be defined as string; next, you did not know how many pieces of string there are, and nobody had done this experiment before; and lastly, you thought you had collected string but in fact it was cord.

    Even if you had collected string, and even though you were quite sure about your integrity and honesty in the collection, the bias caused by your limited knowledge of string length meant you had not included much longer lengths.

    How then will statistical analysis give you any idea about the true parameter of string in terms of its average length? It might give you a good idea about string length relative to your bias.
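
    A toy simulation of the string analogy makes the point; the length distribution and the "arm span" cut-off below are my own inventions:

        import numpy as np

        rng = np.random.default_rng(1)
        population = rng.exponential(scale=2.0, size=100_000)  # true mean length: 2.0 m

        arm_span = 1.8                                     # the collector's reach
        biased = population[population <= arm_span][:200]  # only pieces you could span
        honest = rng.choice(population, size=200, replace=False)

        print(f"true mean        : {population.mean():.2f} m")
        print(f"honest sample    : {honest.mean():.2f} m")   # close to 2.0
        print(f"truncated sample : {biased.mean():.2f} m")   # ~0.8 m, systematically short

    No downstream statistics, however carefully applied, can recover the true mean once the collection protocol has truncated the sample.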

    So how can we rely on results from statistically analysed data that have not been perfectly collected, despite our best efforts?

    How can we improve our data collection so that our statistics will reliably reflect the population parameter?

    Can we ever get to the situation where we can say more than:

    Here is our experiment, this is how we did it, here are the results, and this is our conclusion. Here is the statistical analysis; it says that the magnitude of the correlation was high and that the significance was good, so the probability that my results were a fluke is only 1 in 20. But this is only a guide, and you will have to make your own mind up as to whether or not you think the data collected were accurate, relevant and unbiased in terms of the question.

    P = 0.05 is almost universally adopted nowadays as an indication of significance. Is a 1 in 20 chance good enough for all research? What if the real chance of a fluke needed to be 1 in 100; would the null hypothesis then be incorrectly accepted? Do we pay enough attention to Type II error?
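
    As a quick sketch of the Type II error problem, statsmodels can show how little power a typical small clinical cohort has; the effect size and cohort size below are illustrative assumptions of mine:

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()

        # A moderate effect (Cohen's d = 0.5) with 15 subjects per arm:
        power = analysis.solve_power(effect_size=0.5, nobs1=15, alpha=0.05)
        print(f"power = {power:.2f}")  # about 0.26, i.e. a Type II error rate near 75%

        # Subjects per arm needed for the conventional 80% power:
        n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
        print(f"n per arm = {n:.0f}")  # about 64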

    It turns out that even if P(X|H0) is small (that is, your P-value is "statistically significant"), the converse, P(H0|X), could be quite large, and consequently P(HA|X) (the probability of the alternative hypothesis, given the data you collected) could be quite small, sometimes an order of magnitude smaller than your P-value! There are three ways around this problem:

    Paying attention to statistical power
    Using maximum likelihood approaches to data analysis
    Ignoring P-values and statistical power completely and using Bayesian approaches to data analysis (Lindley, D. V. 1957. A statistical paradox. Biometrika 44: 187-192)
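
    To put numbers on the gap between P(X|H0) and P(H0|X), here is a back-of-envelope Bayes calculation; the prior and the likelihood under the alternative are assumptions of mine, and the P-value is treated loosely as the likelihood of the data under H0:

        prior_H0 = 0.9         # assume most probed hypotheses are false leads
        p_X_given_H0 = 0.05    # the "significant" P-value
        p_X_given_HA = 0.30    # data likelihood under a modestly powered alternative

        # Bayes' theorem
        posterior_H0 = (p_X_given_H0 * prior_H0) / (
            p_X_given_H0 * prior_H0 + p_X_given_HA * (1 - prior_H0))
        print(f"P(H0|X) = {posterior_H0:.2f}")  # 0.60: the null is still the better bet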

    Bayesian probability, as opposed to the more usual frequency probability, would appear to make sense to many of the scientific clinicians who write here.

    Subjective and Logical Bayesian probability:-

    Subjective Bayesian probability interprets 'probability' as 'the degree of belief (or strength of belief) an individual has in the truth of a proposition', and is in that respect subjective. Some people who call themselves Bayesians do not accept this subjectivity. The chief exponents of this objectivist school were Edwin Thompson Jaynes and Harold Jeffreys. Perhaps the main objectivist Bayesian now living is James Berger of Duke University. Jose Bernardo and others accept some degree of subjectivity but believe a need exists for "reference priors" in many practical situations.

    Advocates of logical (or objective epistemic) probability, such as Harold Jeffreys, Rudolf Carnap, Richard Threlkeld Cox and Edwin T. Jaynes, hope to codify techniques whereby any two persons having the same information relevant to the truth of an uncertain proposition would calculate the same probability. Such probabilities are not relative to the person but to the epistemic situation, and thus lie somewhere between subjective and objective. The methods proposed are not without controversy. Critics challenge the claim that there are grounds for preferring one degree of belief over another in the absence of information about the facts to which those beliefs refer. However, these criticisms are usually reconciled once the question one is trying to ask is clear.

    The Controversy between Bayesian and Frequentist Probability
    The use of the term "Bayesian" in regard to specific mathematical methods in probability theory is often confused with the epistemological debate about the nature of uncertainty. In the sense that Bayesian probability is used in the context of the epistemological debate about the nature of probability - sometimes called credence (i.e. degree of belief) - it contrasts with frequency probability, in which probability is derived from observed frequencies in defined distributions or proportions in populations. The position of the frequentist is that probability has no meaning other than to express relative frequencies over a large number of observations. To a Bayesian (in the epistemological sense), a probability can also mean a subjective level of knowledge. (http://en.wikipedia.org/wiki/Bayesian_probability)


    Therefore I can decide on the probability from the experimental evidence and my level of knowledge about the subject under discussion.

    This sounds OK, and perhaps more useful for the unavoidably vague conclusions often drawn in medical and paramedical science; but can I quantify my decision, which is what we all like to, need to, or must do nowadays?

    The answer is yes, but then there's a whole new area of probability theory to learn, and I haven't learned the usual one yet. It never ends. AAAAGH!


    Cheers Dave
     
    Last edited: Feb 28, 2008