Wednesday, January 29, 2020

fighting that knee jerk reaction

I've spent the last few days in bioethics and regulation educational training. I have a lot of bioethics, ethics, and regulation background, so this was merely some "freshening up" of skills. Still though, as I do a lot of the time, I wonder a basic question: why have I read and seen so much SciFi?

I mean, if you ask me, I was always a fantasy person - not a SciFi one.

I of course read Tolkien (and a lot of other fantasy writers, long before George R.R. Martin - the ones before him when I was a youngster: Lloyd Alexander, David Eddings, Raymond E. Feist, Janny Wurts, and Elizabeth Moon), and while I read some SciFi it was never my thing. It took me until my early twenties, when I took bioethics, to understand why I didn't fancy reading SciFi. It's simplified to say it like this, but still: "it's too complicated, so I don't relax when I read it".

A lot of it deals with "what is human?" (hint, it has to do with the soul, and that robots don't have one but humans who are created by God do...), and when you look at some of the bioethics questions today, a lot stems from "is it a human specimen?".

It's still the way I end up explaining my knee-jerk reaction to all of the new computer-based stuff. The AI, the ML (machine learning), the telehealth, the big data... for all of it I have a visual: either the Terminator world I saw as a young teenager, or Gattaca (what a movie!) when I started my career as a biotechnologist, or Alexa/Echo being in more than 50% of my friends' homes, supporting their next Google/Amazon search based on previous conversations. I know that I'm wary of this. I know that I think about it more since I work with privacy laws, bioethics, international law, and views on a lot of biomedical research and its future.

However, after sitting through a couple of discussions and pitches these last couple of days, I had to remind myself: "Don't be scared and worried. Everyone else at the table is ok with this. It's only you who wants to disconnect the computer and go back to keeping everything on paper."*

At least I know I'm having a non-actionable reaction and need to work through what I'm really concerned about: the difference between what "people think can be done" and what "companies sell as being able to be done", how this might be affected by people who are not working in good faith, and what they can accomplish with the tools that are already available out there.

And I have very mixed feelings on AI, especially after that final lecture on AI at AER2019: "The learning is only as good as the training pilot program is devoid of bias." Indeed. My thought would be that there is still a lot to be learned, and there is still huge value in having some regulations keeping track of what people can do in the meantime - before the next scare/scandal.



*Glorious days. So easy to keep things private and prevent exploitation. Of course, there would be no future breakthroughs either, so not a viable way going forward...
