contextual ethics

10Dec12

Dan Ariely wrote a book about the honest truth about dishonesty. On his blog he comments: “The irony of illegally downloading a book on dishonesty is painfully obvious”. He argues that “once we start thinking of ourselves as polluted, there is not much incentive to behave well, and the trip down the slippery slope is likely. But if we have the opportunity to admit and apologize, receive amnesty for the material already illegally obtained, you can start fresh”. Confessing, articulating what you have done wrong, is an incredibly effective mechanism for resetting your moral compass. The Grand Old Lady, the Waldorf Astoria in New York, probably had this in mind when it started its amnesty campaign. Although Matt Zolbe, director of marketing at the Waldorf-Astoria, gives two other drivers: “The amnesty program has two goals: to provide us with the elements as we build our archives and to project the Waldorf to Generation Y through social media.”

A Dutch professor who was caught framing and inventing his research data wrote a book in which he reflects on his fraud. The book was received with much disdain, since in the eyes of his critics he had simply found another way to make money while making up stories. The book was soon to be found on ShareSend for free (not anymore). The rationalization was that since the professor faked the data for his research papers, it was justified to ‘steal’ the (online) book. The argument “I can do to you what you did to me” is not fully correct, since not all of the downloaders will have suffered personally from the made-up research papers; his actions can, however, be seen as a general attack on our faith in science. A case of double standards? 1) it is wrong to disregard copyrights; 2) it is allowed to illegally download if we think the beneficiary has impacted our worldview without our consent. In a world where ‘co-creation’ and the ‘wisdom of crowds’ are gaining popularity, there should be clear rules about how we decide what is just and what is wrong. We could choose strict but dynamic rules that keep up with the progress of the real world. Or should we opt for more contextual rules, where the interpretation of a rule depends on the social setting it is applied to?

It is not easy to decide whether we should use black-and-white rules or a more adaptive dialogue of justification to decide what is right or wrong. But I do know that we should always be aware of our infinite ability to find warped arguments that straighten our crooked views on reality. Another question, of course, is whether we should have a global view on ethics in our global world. Or will this always be diluted by our different cultural views?

To make it specific: if you played Phylo, the DNA puzzle that helps solve real medical problems, and your input revealed a new insight for medication, should the pharmaceutical company pay you for your input? Should it pay the Phylo community? Should you receive bonus points from patients using your solution? Should you be freed from paying tax, or be granted extra social services, since you helped society? And would it help to reward crowdsourcing in order to steer the world toward a more collaborative state? Or should this depend on whether your solution is applied in developed or underdeveloped countries?

Maybe we could make a kind of ethics graph, the way Google is building its knowledge graph. In that way even our morality would become a semantic technology. Although it does sound scary that our morals would be made up by an algorithm, it might be as simple as being asked to periodically answer questions about right and wrong, with our global moral code generated from all the answers.
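To make the idea concrete: the simplest possible version of such a generated moral code is just majority voting over everyone's periodic answers. This is a minimal sketch of my own, not an existing system; the function name, the tuple format, and the 50% threshold are all illustrative assumptions.

```python
from collections import Counter, defaultdict

def global_moral_code(answers, threshold=0.5):
    """Aggregate individual right/wrong judgments into a shared code.

    `answers` is a list of (respondent, scenario, verdict) tuples, where
    verdict is True for "right" and False for "wrong".  A scenario enters
    the code as "right" when more than `threshold` of its verdicts say so.
    (Hypothetical format; a real system would need far richer context.)
    """
    votes = defaultdict(Counter)
    for _respondent, scenario, verdict in answers:
        votes[scenario][verdict] += 1
    code = {}
    for scenario, counts in votes.items():
        total = sum(counts.values())
        code[scenario] = "right" if counts[True] / total > threshold else "wrong"
    return code

answers = [
    ("a", "download a book you already bought", True),
    ("b", "download a book you already bought", True),
    ("c", "download a book you already bought", False),
    ("a", "download a fraudster's book", False),
    ("b", "download a fraudster's book", False),
]
print(global_moral_code(answers))
```

Even this toy shows where the contextual trouble starts: the threshold, the phrasing of each scenario, and who gets to vote are all moral choices hidden inside the algorithm.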

citizen science

Zooniverse launched Galaxy Zoo in July 2007, asking its users to help classify galaxies archived by NASA’s Hubble Telescope. The project’s developers expected a small core of committed users to slowly but surely identify the galaxies in the images. It must have been quite a surprise, then, that within 24 hours of launching, Galaxy Zoo was accumulating 70,000 classifications per hour. In the first year, the platform helped crowdsource 50 million galaxy classifications. Zooniverse also introduced Seafloor Explorer, which asks users to locate underwater creatures. Given the project’s popularity, Zooniverse decided to apply the concept to other areas of science. Today, it features nine projects and three “laboratories,” all asking for the crowd’s input on a range of topics, from studying explosions on the sun to helping researchers understand what whales are saying. Other examples: fold.it, polymathprojects.org.
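The reason many volunteers can classify the same galaxy is that the individual labels get reduced to a consensus afterwards. The sketch below is not Zooniverse’s actual pipeline, just an illustrative assumption of how such a reduction could work: collect votes per object and accept the most common label once enough votes are in.

```python
from collections import Counter, defaultdict

def consensus(classifications, min_votes=3):
    """Reduce many volunteer labels per object to one consensus label.

    `classifications` is an iterable of (object_id, label) pairs.  An
    object only gets a consensus label once it has at least `min_votes`
    votes; the most common label wins.  (Illustrative sketch, not the
    real Zooniverse aggregation.)
    """
    votes = defaultdict(Counter)
    for object_id, label in classifications:
        votes[object_id][label] += 1
    result = {}
    for object_id, counts in votes.items():
        if sum(counts.values()) >= min_votes:
            label, _count = counts.most_common(1)[0]
            result[object_id] = label
    return result

raw = [
    ("galaxy-1", "spiral"), ("galaxy-1", "spiral"), ("galaxy-1", "elliptical"),
    ("galaxy-2", "elliptical"),
]
print(consensus(raw))  # galaxy-2 has too few votes to be decided yet
```

The same pattern explains the numbers above: no single classification needs to be trusted, because redundancy across thousands of users turns noisy individual input into usable science.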
