  • evolveremh

Limits of Science



When does it all end? At what point does humanity stand back and conclude: that is all we can ever know about the universe, about ourselves, about anything? It has been some 5-7 million years since the human lineage diverged from the other apes, and only a few centuries since we discovered what we call modern science. It really is remarkable that, starting with a myriad of sticks and stones, we have engineered and created the marvels we see today. Who would have thought that chiseled stones and broken branches would be transformed into megalithic monuments dedicated to absent deities, and into artificial veins under the oceans that willingly deliver the most trivial of information at the click of a button.


The limits of science would more logically be the limits of human perception and analysis. Our dearly beloved honey bees see only within the 300-650 nm spectrum, essentially the yellows and blues, oblivious to the bright red of a rose. Who knows how many shades of this planet's beauty we humans miss out on simply because we lack the senses? Another natural flaw that emerges is our need for generalization. Errors are as familiar to us as truths. Anything that does not follow the pattern of a predictable data set is deemed an error. We have tried to lull ourselves into a false sense of security by creating methods that label these errors as significant or insignificant: whether the deviation was due to 'chance', or reason enough to reject our generalization. The chi-square test, Bayes' theorem, Fisher's exact test and so many others are our attempts to 'fit' things into neat little boxes. It would be unwise to say they do not have merit, as these are, at their core, tools we have made to understand what little we perceive of this world.
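To make the 'neat little boxes' concrete, here is a minimal sketch of the chi-square test mentioned above, written in plain Python. Given a 2x2 table of observed counts (the counts below are invented purely for illustration), it computes the expected counts under the assumption of independence, sums the chi-square statistic, and compares it to the 5% critical value for one degree of freedom (about 3.841) to label the deviation 'significant' or 'due to chance':

```python
# Hypothetical 2x2 contingency table of observed counts,
# e.g. rows: treated / control, columns: recovered / not recovered.
observed = [[30, 10],
            [20, 20]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = (row total * column total) / grand total.
chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

# Critical value for alpha = 0.05 with 1 degree of freedom (2x2 table).
CRITICAL_5PCT_DF1 = 3.841
significant = chi_square > CRITICAL_5PCT_DF1
print(f"chi-square = {chi_square:.3f}, significant at 5%: {significant}")
# -> chi-square = 5.333, significant at 5%: True
```

The test does exactly what the essay describes: it draws a single numeric line and files every observation into one of two boxes, 'chance' or 'real effect', regardless of what the unexplained variation might actually be.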


But we do injustice to observations by labelling them as errors. What are errors, if not variables we do not yet understand, or perhaps understand but choose to ignore because they mess up our little boxes? And even with these complicated, rigorous tools, we still often fail in the face of uncertain hypotheses and the demands of diligent reproducibility.


The replication crisis, or the reproducibility problem, has plagued us for decades now: a methodological crisis in which it has proved nearly impossible to reproduce the results published in many scientific studies. Medicine and psychology are the most affected, but the crisis has been reported in almost all fields of the natural sciences. The field of metascience has emerged in response, hoping to use the scientific method to study science itself and make improvements. Since reproducibility is essential to the scientific method, what are we to conclude from this? What does it signify for the years of research and observation one puts in, nullified by a single reattempt? But alas, that is the way of science: one generation builds and the next falsifies, only to rebuild and wait patiently to be broken again. Objective truth is but a fleeting bird, always beyond our grasp; science, mathematics and philosophy all originate from the cradle of epistemology, yet none is the wiser.


In the ever-growing fields of computational science and artificial intelligence, a technological singularity is defined as a hypothetical point at which technological advancement becomes uncontrollable and irreversible, i.e. the intelligence explosion. It is no news that computers already far exceed humans in many respects. Whether such a singularity would favor or threaten human existence is a topic of much debate. Of course, it is but a hypothetical point, and humans seem awfully adept at threatening their own existence; who knows, we might destroy ourselves before we ever reach it.
