Should Everyone Learn Computer Science?

After reading the articles, do you believe that coding is the new literacy? Should everyone be exposed to or required to take a computer science or coding class?

What are the arguments for and against introducing everyone to computing or programming? What challenges will schools face as this CS4All push moves forward?

How should computer science fit into a typical K-12 curriculum? Is it an elective or a requirement? Does it replace existing subjects or is it an addition? What exactly should be taught in this CS4All curriculum? Is this computational thinking? programming? logic? computer literacy?

Can anyone learn to program? Should everyone learn to program? Explain why or why not to both.

One of the most rewarding things I’ve done as a student at Notre Dame is pair my major in Computer Science with a minor in Philosophy, Politics, and Economics (PPE). PPE, which began at Oxford, focuses on the intersection of the three disciplines and how their various strengths can contribute to improved discussion and analysis of the world at large. Comparing and combining PPE and Computer Science has informed how I’ve evaluated my computer science education in a few ways.

The first is an understanding of how computer science informs the study of thought itself, in a similar vein to philosophy and mathematics. Studying the theory of computer science in particular reveals the nature of problem solving, shows how the difficulty of problems can be quantified, and begins to unveil the interconnectedness of many problems, all of which I believe is essential to a general, abstract understanding of how the world works. I believe that understanding the underpinnings of logic produces a wiser and more prudent individual, which society desperately craves.

The second is a greater appreciation for the incredible relevance of computer science. I imagine that many people view computer science the way I used to view economics and political science: from the outside, I saw them as niche disciplines that inform specific careers but are not especially useful in day-to-day living. After taking classes in both fields, however, I was struck by how much of my decision making, whether in purchases, votes, or plans for the future, had been influenced by what I had learned. In our society, striving to understand how the economy and our political system work is incredibly important. To try to make decisions without a mental model of these mechanisms is like driving blind.

The growing role of technology in daily life has, in my opinion, raised the importance of studying computer science to at least the level of economics and political science. To be an informed member of society, you need a basic understanding of how the Internet and your phone work. Without it, you make decisions about what to purchase, how to fix problems, and what to do next without understanding why. This is already unacceptable, and it will only become more so as technology becomes increasingly entwined with daily life.

However, there are challenges to universalizing the study of computer science, and many of them stem from qualities of the field itself. Computer science, perhaps more than any other discipline, relies upon building on others’ work. The goal of coding is often to add layers of abstraction, and that abstraction can make the underlying principles quite difficult to see. Gaining a fundamental understanding of the computer itself is harder still: it requires diving deep into the workings of logic gates and registers, which intimidates many people.
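As a minimal sketch of what those lowest layers look like (the gate names and wiring here are purely illustrative, not drawn from any particular curriculum), a half adder builds one-bit addition out of two primitive gates, and everything above it is layered on constructions like this:

```python
# Toy half adder: XOR produces the sum bit, AND produces the carry bit.
# Multi-bit adders, arithmetic units, and eventually whole programs are
# built by stacking abstractions on top of pieces this small.
def xor_gate(a: int, b: int) -> int:
    return a ^ b

def and_gate(a: int, b: int) -> int:
    return a & b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two single bits, returning (sum, carry)."""
    return xor_gate(a, b), and_gate(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={carry}")
```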

Still, I believe that an introductory understanding of programming languages and of how programs are made is valuable; it helps explain why programs have bugs and limits. That kind of understanding improves the quality of every interaction with technology, because the machine is no longer such a black box.

Internet Trolls

From the readings and from your experience, what exactly is trolling? How does this behavior manifest itself and what are its causes and effects?

What ethical or moral obligations do technology companies have in regards to preventing or suppressing online harassment (such as trolling or stalking)?

Is anonymity on the Internet a blessing or a curse? Are “real name” policies useful or harmful in combating online abuse?

Is trolling a major problem on the Internet? What is your approach to handling trolls? Are you a troll?!?!?

Anonymity is at the heart of many Internet issues, but anonymity itself does not create conflict; it only enables it. Anonymity removes the ramifications for violating the social norms that protect the dignity of all people, allowing bigoted, violent, or ignorant individuals to voice their opinions without fear of reprisal. Worse still, the connective nature of the Internet allows those who share prejudiced beliefs to discuss them with one another, reinforcing their backwards views and shielding them from the larger, more tolerant and progressive societal conversation. In addition, the text-only nature of most Internet communication strips away contextual cues about sincerity and veracity, allowing strangers to inflame other strangers with barely any effort at all.

However, anonymity is also one of the Internet’s greatest strengths. In no other medium can one’s identity be so easily veiled without reducing one’s ability to communicate, and this has enabled many conversations that could not otherwise take place. Whether it is whistleblowing on immoral activities, supporting those who have lived through discrimination, or organizing protests against unjust situations, the anonymous Internet has been a powerful force for change. I’m not sure that removing the ability to be anonymous in order to combat negative speech is worth losing the Internet as a pressure-release valve against oppressive regimes and discriminatory practices. Requiring a form of identification to follow you around the Internet also opens up a whole host of privacy issues, ranging from advertising tracking to corrupt governments hunting down dissidents. It is an enormous amount of information to place online, and the potential for misuse is vast.

Trolling, then, is a manifestation of some of the grosser aspects of humanity, which bursts forth hatefully because anonymity obscures the individual behind the troll, who is then stereotyped and vilified rather than treated as a person. Instead of merely screening out trollish communications, attempts to end trollish behavior should strike at the underlying causes, such as self-esteem issues, prejudices, and the pleasure some derive from angering others. These psychological issues affect more than just Internet behavior, so treating only one manifestation of them simply hides the symptoms of a deeper problem. Where filtering is needed, crowd-voted platforms such as Reddit and Yik Yak offer ways to screen out trollish comments without completely sacrificing the open forums those products create.

Encountering trolls can be a frustrating experience, because the reality of the human on the other side is obscured and often deliberately ignored. However, the experience reminds us all of the larger reality that there are people out there whose lives are filled with hate; the Internet just brings us closer to them. We must meet these people with love and compassion, though not necessarily over the same channel: denying trolls the attention they crave may help teach them the uselessness of their comments.

Artificial Intelligence

From the readings, what is artificial intelligence, and how is it similar to or different from what you consider to be human intelligence?

Are AlphaGo, Deep Blue, and Watson proof of the viability of artificial intelligence or are they just interesting tricks or gimmicks?

Is the Turing Test a valid measure of intelligence or is the Chinese Room a good counter argument?

Finally, could a computing system ever be considered a mind? Are humans just biological computers? What are the ethical implications of either idea?

The study of intelligence has confounded nearly every approach, from science to sociology. In essence, discussions of intelligence concern how a specific actor makes decisions in response to stimuli. Certain responses are considered more intelligent than others, and how often those intelligent decisions are chosen determines an actor’s overall intelligence. Already subjectivity is introduced into the definition, for declaring which decisions are intelligent grows exponentially more difficult with the complexity of a situation. Evaluating the decision-making process becomes more difficult still when looking more deeply into how decisions are made; the field of philosophy in particular offers logical arguments against the feasibility of evaluating intelligence. One relevant school of thought is skepticism.

One of the most famous philosophical statements (in its full form) is Descartes’ “I doubt, therefore I think, therefore I am.” His claim is that we cannot be deluded as to our own existence: if we did not exist, then there would be no target of delusion, therefore we must exist, at the very least so we can doubt our own existence. Descartes’ conclusion reveals a central concept in the study of intelligence: the most informed perspective on a specific intelligence is the intelligence itself. The skeptic takes this conclusion and goes further, claiming that the only intelligence that we are sure exists is our own. A popular expression of this belief is the question, “What if everyone else is a robot?” There is no way to be completely sure that the people surrounding us are not deterministic facsimiles of humans, since we cannot see into their heads.

We can, however, see into the minds of artificial intelligences. But does this actually make it easier to classify and define intelligence? Many intelligence-evaluation experiments choose to ignore this access: the Turing test is one of the most famous, and it uses only external signals to infer intelligence. The skeptic denies that passing the Turing test is a result with any real meaning, since there is no way to know how the behavior exhibited during the test came to be. But even if we knew, what could we conclude? Intuitively, we do not consider a finite state machine, which simply acts on a set of rules given an input, to be intelligent, no matter how sophisticated its rules. We also intuitively consider ourselves to be intelligent beings, yet we are not even sure how we as humans come to decisions! And even if we were, it seems shortsighted to claim that only our method of decision-making qualifies as sufficiently intelligent.
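To make the finite-state-machine intuition concrete, here is a minimal Python sketch of such a rule-following responder (the states and rule table are invented for illustration): however large the table grows, the machine only ever looks up what it was told to say.

```python
# A tiny rule-based responder: a finite state machine mapping
# (state, input) pairs to (next state, canned reply).
RULES = {
    ("start", "hello"): ("greeted", "Hi there! How are you?"),
    ("greeted", "fine"): ("done", "Glad to hear it."),
    ("greeted", "bad"): ("done", "Sorry to hear that."),
}

def respond(state, message):
    # Fall back to a generic reply when no rule matches.
    return RULES.get((state, message.lower().strip()), (state, "Tell me more."))

state = "start"
for line in ["Hello", "fine"]:
    state, reply = respond(state, line)
    print(reply)
```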

Observation-based approaches such as the Turing test, while useful for colloquially calling something “intelligent,” fall short of providing a complete definition of intelligence, as the skeptic’s argument above shows. Some argue that such a definition is not needed, but without one we are unprepared to answer moral questions about the rights of artificial intelligences and about the nature of our own existence in comparison. We must investigate our own decision-making processes further, as well as those of any other intelligences we discover, in order to build a stronger catalog of the decision-making processes we intuitively consider intelligent. Only then will we be in a position to decree what qualifies as intelligent.

Net Neutrality

From the readings, what exactly is Net Neutrality? Explain in your own words the arguments for and against Net Neutrality. After examining the topic, where do you stand on the issues surrounding Net Neutrality?

If you are in favor of Net Neutrality, explain how you would implement or enforce it. How would you respond to concerns about possible over-regulation, burdening corporations, or preventing innovation?

If you are against Net Neutrality, explain why it is unnecessary or undesirable. How would you respond to concerns about providing a level playing field or preventing unfair discrimination by service providers?

In either case, discuss whether or not you consider that “the Internet is a public service and fair access should be a basic right”.

The debate surrounding Net Neutrality is framed by the analogies used to describe it, all of which attempt to paint the underlying economics in a specific way. The core argument centers on the idea of data being “treated equally.” In essence, Net Neutrality protections prevent Internet service providers from granting preferential speeds to data from certain sources, and removing those protections legalizes the practice. The most common analogy is one of “fast lanes” and “slow lanes”: without Net Neutrality protections, ISPs can place data from certain sources in the slow lane while leaving their own data in the fast lane. To ensure that their data remains in the fast lane, companies must pay ISPs for access to it.

This is problematic, from certain perspectives, for a few reasons. The largest is that some data, such as streaming video and online gaming, loses much of its value to the customer if it is not delivered quickly. If Netflix loads a movie at a snail’s pace, the Netflix customer experience is substantially degraded. This is especially alarming given that many ISPs offer their own streaming video services that compete with Netflix but are delivered in the aforementioned “fast lane,” ensuring a better user experience.

On its own, this is not a singularly disturbing phenomenon: after all, corporations often strive to differentiate products on their “home turf,” such as Apple with Apple Music, Facebook with Messenger, and even grocery stores with integrated gas stations. Such synergy between services is often a boon to users, as it results in a superior product. The issue arises when competition is not present. According to broadband.gov, 96% of Americans have at most two ISPs to choose from. This means that if both of your potential ISPs decide to place Netflix in the slow lane, you have no recourse but to switch from Netflix to a competing service. Netflix knows this, which is why it was effectively extorted into an expensive deal with Comcast to ensure that its speeds remained strong (http://arstechnica.com/information-technology/2014/04/after-netflix-pays-comcast-speeds-improve-65/).

In an ideal world, Net Neutrality protections would not be needed, because there would be a multitude of ISPs to choose from, and they could differentiate themselves on service speeds, e.g., “Join WebNet for the fastest Netflix speeds around!” The presence of Netflix is then felt as a market force driving the product toward user satisfaction, and users win as a result. Without competition, the situation flips: Netflix and other streaming services must instead compete for the favor of the ISP, which often offers a competing service of its own. As a result, ISPs get rich off deals with services such as Netflix without needing to improve the product for the end user. To many economists, this is an example of rent seeking, since the monopoly (or duopoly) that ISPs hold prevents the end consumer from accessing a better product.

Typically, the government should seek to eliminate rent seeking, since it exploits particular market conditions, reduces the overall efficiency of the economy, and artificially concentrates the flow of wealth. Rent seeking also makes it harder for new entrants to join the market: a competitor to Netflix that cannot afford the fast lane is artificially excluded by the rent-seeking efforts of the ISP. This, of course, hurts consumers.

There is an economic counterargument to Net Neutrality protections, however. It begins with the observation that if ISPs must treat all data equally and some services (such as Netflix) send a great deal of data, infrastructure improvements will be necessary. The cost of those improvements will be passed on to end consumers, since Net Neutrality protections prevent ISPs from charging data-heavy companies for faster speeds. Thus, because Net Neutrality protections constrain how an ISP can use its network, they raise the cost of an Internet connection for every user, not only for those who subscribe to Netflix or other data-hungry services. This has the opposite of the intended effect of providing an egalitarian Internet for all, since fewer people can afford a more expensive connection.

Further government intervention might then be offered as a solution: if the government dictates the price of an Internet connection, everyone can have an affordably priced, equal-footing connection. However, this solution drastically reduces the ability of new entrants to join the ISP market. Large ISPs with entrenched infrastructure may be able to absorb the revenue loss from dictated prices, but smaller ISPs may not, and the large ISPs would become de facto government utility providers. This is a dangerous consequence, because it suppresses further innovation in the ISP space. A new ISP relying on a nascent technology might be more expensive but offer other benefits (such as connectivity flexibility); in a price-dictated market, such a business would be precluded from being profitable. For example, it is argued that the progress of solar technology has been artificially hampered by price regulation of electricity: if electricity were not kept as cheap as regulators make it, solar panels would be a far more attractive option, and research in the field would have been more substantially funded (by both revenue and investors seeking to disrupt the market). For this reason, society should only dictate prices for goods that we are confident cannot be improved upon, because otherwise we halt the progress of innovation. I am not confident the Internet qualifies as such a mature good.

Project 3 Reflection

Is encryption a fundamental right? Should citizens of the US be allowed to have a technology that completely locks out the government?

How important of an issue is encryption to you? Does it affect who you support politically? financially? socially? Should it?

In the struggle between national security and personal privacy, who will win? Are you resigned to a particular future or will you fight for it?

George Orwell’s 1984 described a dystopian society in which nothing was private. Rooms were mic’d, televisions watched their viewers, and even thoughts were policed. There is a reason that Orwell’s society, and other fictional societies like it, is met with almost universal repulsion: there is something deeply unsettling about a world in which one cannot keep secrets. The content of the secret is irrelevant; it is the loss of the ability to protect private matters from outside exposure that feels intrinsically wrong. I believe that this visceral, primal opposition to societies like the one depicted in 1984 reveals that, at some level, all humans appreciate a right to keep things hidden.

The Constitution acknowledges this right to a degree: the Fourth Amendment protects “the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” In today’s age, I would consider the digital records people keep on their phones to be their “papers” and “effects.” I also believe that weakening the encryption of all iPhones (or any other electronic device) in order to access data is “unreasonable” and violates the rights of citizens. When the government claims a right to all information, we take steps toward Orwell’s vision.

The issue of encryption, while important in itself, represents a larger struggle between personal liberty and government intervention that technology has brought to the surface. A stance against strong encryption indicates either a misunderstanding of how the technology works or an interventionist, often conservative, stance on other issues, such as foreign troop deployment, marriage equality, and religious freedom. These are all issues I care deeply about. I am incredibly proud to be working for a company at the forefront of this debate, and I believe that Apple’s involvement in this discourse elevates the work I do for them.

It’s difficult to predict who will “win” the debate between national security and privacy, because I doubt that our society will ever reach an extreme of 1984 or a privacy-oriented counterpart. I do think it is possible to hold back the advance of government: the FBI just announced that it wishes to vacate its case against Apple, proving that fighting back can stem the tide. I am proud to be part of that fight.

Project 3: A Letter to the Editor on Encryption

By Shuyang Li, Meghan Pfeifer, Andrew Russell, and Zach Waterson

It has been a showdown for the ages: Apple, the most valuable company in the world, stands in the way of an FBI terrorist investigation. What an incredible marketing ploy! Or so it would seem, at least. Today’s debate surrounding encryption is often positioned around specific situations in an attempt to make the conflict easier to understand for outside parties. Indeed, in the national polls used to gauge the degree of support Apple has in its case against the FBI, the following question was posed by Pew:

“As you may know, the FBI has said that accessing the iPhone is an important part of their ongoing investigation into the San Bernardino attacks while Apple has said that unlocking the iPhone could compromise the security of other users’ information.

Do you think Apple:

(1) Should unlock the iPhone

(2) Should not unlock the iPhone?”

This is a biased framing of the debate, for the conflict at its heart is far larger than a single phone. To recognize why, it is important to have a working understanding of the underpinnings of encryption and how it protects information.

Encryption is an umbrella term for how we keep all our data and communications secure and private in the digital age. It is similar to how militaries used secret codes and machines to secure their communications from their enemies in the past, but in the modern era, it is far more ubiquitous: all major online service providers and device manufacturers now adopt some encryption scheme. When you browse Facebook, your Facebook feed is encrypted, so nobody can snoop on your friends’ updates; when you send an iMessage on your iPhone, your message is encrypted, so no one can read your messages; when you transfer money between bank accounts, your instruction is encrypted too, so hackers cannot modify your instructions to move your money to their accounts.

Another important property of encryption is that all secure encryption schemes today are based on mathematical principles and can be performed by any computer rather than by specially designed machines. This means it is not possible to weaken an algorithm for just one device or one service: for encryption to work, every computer must agree on how messages are encrypted and decrypted; otherwise communication is impossible. It works exactly like mathematics: we cannot say that 1 + 1 equals 2 on most devices but equals 3 on one specific device.
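As a minimal sketch of that idea (assuming the third-party Python cryptography package and its Fernet recipe; the message and key here are placeholders), the same publicly known algorithm runs on every computer, and only possession of the secret key makes the data readable:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the secret key; whoever holds it can decrypt
cipher = Fernet(key)

token = cipher.encrypt(b"transfer $100 to checking")
print(token)                     # scrambled bytes, useless without the key
print(cipher.decrypt(token))     # b'transfer $100 to checking'
```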

Now that the concept of encryption has been introduced, one can turn to the conflict between Apple and the FBI and evaluate it within a larger context. All iPhones encrypt the data stored on the device using the passcode of the phone as a secret key. Without the passcode, it is impossible (practically speaking) to access the data. So, the FBI is asking Apple to build a custom version of its iPhone software, known as iOS, that has weakened security restrictions, allowing the FBI to guess the passcode of the phone at an artificially accelerated rate without consequence. Apple has multiple objections to this request, but one of its greatest fears is that this custom version of iOS, which it calls GovtOS, could escape into the wild. If that happens, then anyone who steals an iPhone can load GovtOS onto that phone, enabling it to be hacked using the same tool that the FBI wants for itself. No such tool currently exists. The security of the iPhone, which protects user information such as passwords, payment information, and personal data, would be compromised, and users could no longer trust the iPhone to serve as the nexus of their digital lives.
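A rough back-of-the-envelope calculation shows why those security restrictions matter (the roughly 80 ms per guess is a figure Apple has publicly reported for its on-device key derivation; the rest is simple arithmetic): with the escalating delays and the ten-try wipe removed, a six-digit passcode can be exhausted in about a day.

```python
# Assumed figures: 10**6 possible six-digit passcodes, ~80 ms of on-device
# key-derivation work per guess. Normally iOS adds escalating delays and an
# optional wipe after 10 failures; the requested software would remove them.
GUESSES = 10 ** 6
SECONDS_PER_GUESS = 0.08

worst_case_hours = GUESSES * SECONDS_PER_GUESS / 3600
print(f"Worst case with no delays: about {worst_case_hours:.0f} hours")  # ~22 hours

# With the wipe enabled, an attacker gets at most 10 guesses,
# so brute force is hopeless without the weakened software.
```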

This is the fundamental issue with requests to weaken encryption “only for the good guys”: encryption works by exploiting fundamental mathematical principles to obscure data, and one cannot weaken those principles without compromising them equally for all parties. There is no way to ensure that, if Apple were to secretly build a backdoor for the government to unlock iPhones, only the government would ever use it. A hole in security is open to both good and evil, and opening one should be done with the knowledge that such holes make hacking by criminals easier as well. In an age where hackers already compromise companies and users regularly, is it responsible to make personal information even easier to steal?

Unlocking iPhones for the FBI would accomplish nothing more than the weakening of American-designed phones. Criminals and terrorists will still find ways to encrypt their information and communications by purchasing different phones or services (or building their own) that provide stronger encryption. Weakening consumer-level encryption would only increase the risk that your personal information is hacked, which is irresponsible and short-sighted. Imagine a world in which you cannot be sure that your digital information is secure; the implications for the future of technology are dire. There are better ways to defend national security; compromising the security of every iPhone user is not one of them.

THE DMCA AND PIRACY

From the readings, what exactly does the DMCA say about piracy? What provisions does it have for dealing with infringement? What exactly are the safe-harbor provisions?

Is it ethical or moral for users to download or share copyrighted material? What if they already own a version in another format? What if they were just “sampling” or “testing” the material?

Have you participated in the sharing of copyrighted material? If so, how did you justify your actions (or did you not care)? Moreover, why do you think so many people (regardless of whether or not you do) engage in this behavior even though it is against the law?

Does the emergence of streaming services such as Netflix or Spotify address the problem of piracy, or are these services not sufficient? Is piracy a solvable problem? Is it a real problem?

At its heart, the DMCA is about protecting economic interests. Like patent law, which in part incentivizes inventors to create and publish their inventions, the DMCA applies the notion of copyright to digital data in an attempt to secure economic benefit for content creators. The DMCA is seen as necessary because the extreme ease with which digital data can be copied requires that such copying be regulated if digital data is to be sold at all. Without any restrictions, a single copy of a piece of media could be distributed to the entire Internet for free, a uniquely digital problem.

However, digital copyright restrictions are often cumbersome and overly restrictive, and they almost always alienate customers. Such restrictions have no analog in the physical world: no one expects that buying one chair entitles you to unlimited duplicates of that chair. In many ways, digital data is more malleable than a physical object, and digital copyright restrictions aim to reel that malleability in, often with disastrous results. Because digital restrictions feel so alien (physical rules applied to a digital notion), many users do not believe that circumventing them is against the law. For example, purchasing a DVD of a film allows you to play it on any television, lend it to a friend, fast-forward and rewind, and so on. Some digital film distribution systems eliminate all of those abilities, shocking users who expect the same freedoms that exist in the physical world, especially since digital media often comes at the same price as physical media.

I do believe that accessing media you have not purchased, without paying for it, is illegal: it is stealing like any other theft. The trouble comes with the notion of copying data you have legally acquired. Personally, I believe that if you purchase a piece of media, you should be able to access and manipulate that media in a personal context without restriction. It would not be out of character for me to torrent a film I legally purchased in order to get around overly cumbersome digital copyright restrictions (for example, to watch it offline).

The counterargument to my stance is that the creator of the media has exclusive rights to its distribution and so can levy any restrictions it wants. I don’t necessarily disagree with this claim, but if that is the case, then I do not believe it should be illegal to circumvent those restrictions. The content and its distribution restrictions should be viewed as one product that the customer can use however he or she wishes. If content creators want to engage in an arms race with their consumers, alternately creating and cracking restrictions, that is an acceptable pattern of behavior that a laissez-faire stance can accommodate. Market forces then let content creators compete not only on content but also on the restrictions surrounding its distribution, and customers win in such a competitive environment.

The logical alternative is to have the government regulate and enforce the distribution of digital material. In this scenario, the people have a say in the restrictions on digital content, and market forces are eliminated entirely. What is not acceptable, in my opinion, is when content creators try to have the best of both worlds: to decide upon their own copyright restrictions and get the government to enforce them. This is a difficult scenario for me to morally accept, and so piracy enters the scene as a pressure-release mechanism.

There are already examples showing that the first scenario is plausible: in the music industry, it has been shown that people are willing to pay for music that is conveniently accessible and will turn away from piracy to do so. At the launch of Apple Music, it was estimated that 7 million people paid for a music subscription service while 20 million pirated music (http://www.latimes.com/business/la-et-ct-state-of-stealing-music-20150620-story.html). This past February, however, Apple announced that Apple Music had over 11 million paying subscribers (Spotify’s numbers also went up), indicating that piracy has not eviscerated the market for music (http://techcrunch.com/2016/02/12/apple-music-tops-11-million-subscribers-icloud-reaches-782-million/). The government regulation scenario already exists in monopolized markets in several industries, demonstrating its plausibility as well.

Piracy signals to content creators that their restrictions impose an economic cost on them. Either circumventing those restrictions should not be illegal, so that creators actually feel that cost, or the notion of a market should be removed entirely.

Online Advertising

From the readings and in your experience, what ethical concerns (if any) do you have with online advertising? How is it performed, and what methods are utilized to aggregate and analyze information? Consider the Internet meme that “if you are not paying for it, you’re not the customer; you’re the product being sold.”

What protections should companies provide over user data? Who owns that data and who controls it? Should companies be able to sell that data to third parties? Should they share the information with the government when requested?

Do you find online advertising too invasive or tolerable? Do you use things like NoScript or Adblock? Why or why not? Is it ethical to use these tools?

“Free is a good price.”

—Pew Research Study

This quote resonated powerfully with me on the topic of advertising and the monetization of users. I study economics as part of my minor (Philosophy, Politics, and Economics), and a large part of economics is research into the psychology of price. As the quote implies, free is the best price, and it is incredibly difficult to overcome its psychological power. There is no linear progression from free to, say, $0.99; rather, there is a huge gap in the desire required to spend even as little as $0.99 on something when a free alternative exists. This tendency is visible everywhere: people will drive long distances for free food instead of spending money somewhere closer, attend uninteresting events that hand out free T-shirts, and download poorly designed free apps and games that are plagued with ads.

The Internet is a compelling demonstration of a simple premise: what if everything you do is recorded? Such is the reality of the Internet: every click, keypress, and mouseover has the potential to be collected and recorded, as does the amount of time spent on a page. Since this information is trivially acquired, it was a natural progression to ask what could be done with such vast quantities of it, and the answer, of course, was monetization. Parallel advances in machine learning have allowed large amounts of small data points to yield insights into individuals. This process, however, is often opaque to the consumers being tracked. Yet even when confronted with the truth of how their activity reveals their preferences, which are then exploited for targeted advertising and profit, few people care. “Free” is simply too powerful a word, and when deciding between paying for something and handing one’s clicks and keystrokes to a corporation, the decision is made daily and is obvious.

I get frustrated when people complain about losing privacy in this way; it is the company’s product, and the company can do with it what it wants. Being ignorant of the ways in which corporations make money does not justify indignation at the discovery that your information is being processed and sold, especially when alternatives are available. This is different from government surveillance: with advertising, the free market determines the value of such data collection, whereas there is no free market for governments. If people don’t want Google to collect their information, they don’t have to use Google.

Awareness is important, however, and I do believe that it is unethical for a corporation to collect this data without informing users of the practice. Without complete information, users cannot make informed purchasing decisions. But if presented with two options and given full information on the products, it is the user’s responsibility to decide if their privacy is worth $0.99.
