I recall many years ago when we all were so excited about the ability of computers to crunch huge amounts of data and instantly deliver accurate results. For instance, we could take a data set of 10,000 values, instantly visualize it, analyze it, and in a few minutes develop an in situ stoichiometry. Or we could follow a signal in real time and determine whether what was happening in a reaction was what we thought was happening.
I also recall a time when everyone realized that computer code had bugs, and those bugs could significantly affect the accuracy of results that we took on faith, trusting the computer the same way a student trusts a calculator to magically give them the correct answer on a standardized test.
In many ways our faith in our equipment is the same as the student's faith in their calculator: we use the results without understanding the process. This is nothing new. Researchers have been purchasing equipment for years without completely understanding what the equipment does. However, the fact that we are now using closed code, home-brewed code, or code that we never review adds another level of uncertainty.
The fact that we worry about such things is not new. In my senior lab class for physics, my professor would not accept anything unless I explained what the computer was doing and verified the results. A major error in software was discovered in the lab I worked in as we compared our results with those of other labs doing the same work. This is how science works.
Over the past decade or so there has been a growing realization that we have introduced error into our results not only through bad actors in the research community, but also because our equipment has become so complex, and our assumptions so ingrained, that we accept results as valid without verification. In the work we are starting this week, I see this desire for clarity. Many researchers are asking what we can really know by looking at the photoplethysmogram (PPG) waveform. I also see manufacturers aggressively hiding their code and data, presenting the results of their pulse oximeters as accurate with no transparency.
That is a good point. As educators, we should be mindful of this and emphasize understanding over simply getting the right answers or results.
Now that your code is published via GitHub, I can just steal (borrow) it.