














Communications of the ACM's regional special sections—designed to spotlight a region of the world with the goal of introducing readers to new voices, innovations, and technological research—will feature emerging research and the latest technical advances from East Asia and Oceania next month.

This region includes Japan, Korea, Taiwan, South East Asia (Singapore, Malaysia, Indonesia, Brunei, Vietnam, Thailand, Myanmar, Philippines, Laos, Cambodia), and Oceania (Australia, New Zealand, Papua New Guinea, Fiji, Melanesia, Polynesia, Micronesia).

The section includes a dozen articles that explore the technologies from the region drawing the greatest investment, adoption, and future potential.

Some of the topics on tap include:

• The commercialization of 5G services;

• Digitally enabled healthcare ecosystems;

• Singapore’s quest to achieve a fully smart nation;

• Flagship research projects throughout the region;

• Advances in cybersecurity, data analytics, and finance technologies;

• Technologies for preserving cultural heritage; and,

• Tracing significant government investment in artificial intelligence technologies.

East Asia and Oceania Regional Special Section in April 2020 Issue




















18 Education Computing and Community in Formal Education Culturally responsive computing repurposes computer science education by making it meaningful to not only students, but also to their families and communities. By Michael Lachney and Aman Yadav

22 The Profession of IT Dilemmas of Artificial Intelligence Artificial intelligence has confronted us with a raft of dilemmas that challenge us to decide what values are important in our designs. By Peter J. Denning and Dorothy E. Denning

25 Viewpoint Through the Lens of a Passionate Theoretician Considering the far-reaching and fundamental implications of computing beyond digital computers. By Omer Reingold

28 Viewpoint Four Internets Considering the merits of several models and approaches to Internet governance. By Kieron O’Hara and Wendy Hall

31 Viewpoint Unsafe At Any Level The U.S. NHTSA’s levels of automation are a liability for automated vehicles. By Marc Canellas and Rachel Haga

35 Viewpoint Conferences in an Era of Expensive Carbon Balancing sustainability and science. By Benjamin C. Pierce, Michael Hicks, Crista Lopes, and Jens Palsberg


5 Vardi’s Insights Advancing Computing as a Science and Profession—But to What End? By Moshe Y. Vardi

6 Letters to the Editor Conferences and Carbon Impact

8 BLOG@CACM Coding for Voting Robin K. Hill explains the ethical responsibility of the computing professional with respect to voting systems.

27 Calendar

Last Byte

104 Upstart Puzzles Stopping Tyranny A compromise proposal toward a solution to making it impossible for a would-be tyrant to exceed reasonable authority. By Dennis Shasha


10 Can Nanosheet Transistors Keep Moore’s Law Alive? The technology promises to advance semiconductors and computing, but also introduces new questions and challenges. By Samuel Greengard

13 Algorithms to Harvest the Wind Wake steering can help ever-larger turbines work together more efficiently on wind farms. By Don Monroe

15 Across the Language Barrier Translation devices are getting better at making speech and text understandable in different languages. By Keith Kirkpatrick





03/2020 VOL. 63 NO. 03


38 Securing the Boot Process The hardware root of trust. By Jessie Frazelle

43 Above the Line, Below the Line The resilience of Internet-facing systems relies on what is above the line of representation. By Richard I. Cook



Articles’ development led by













Contributed Articles

48 Crowdsourcing Moral Machines A platform for creating a crowdsourced picture of human opinions on how machines should handle moral dilemmas. By Edmond Awad, Sohan Dsouza, Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan

56 Spotify Guilds When the value increases engagement, engagement increases the value. By Darja Smite, Nils Brede Moe, Marcin Floryan, Georgiana Levinta, and Panagiota Chatzipetrou

62 The BBC micro:bit— From the U.K. to the World A codable computer half the size of a credit card is inspiring students worldwide to develop core computing skills in fun and creative ways. By Jonny Austin, Howard Baker, Thomas Ball, James Devine, Joe Finney, Peli de Halleux, Steve Hodges, Michał Moskal, and Gareth Stockdale

Watch the authors discuss this work in the exclusive Communications video. https://cacm.acm.org/videos/crowdsourcing-moral-machines

Watch the authors discuss this work in the exclusive Communications video. https://cacm.acm.org/videos/spotify-guilds

Review Articles

70 Editing Self-Image Technologies for manipulating our digital appearance alter the way the world sees us as well as the way we see ourselves. By Ohad Fried, Jennifer Jacobs, Adam Finkelstein, and Maneesh Agrawala

80 Toward Model-Driven Sustainability Evaluation Exploring the vision of a model-based framework that may enable broader engagement with and informed decision making about sustainability issues. By Jörg Kienzle, Gunter Mussbacher, Benoît Combemale, Lucy Bastin, Nelly Bencomo, Jean-Michel Bruel, Christoph Becker, Stefanie Betz, Ruzanna Chitchyan, Betty H.C. Cheng, Sonja Klingert, Richard F. Paige, Birgit Penzenstadler, Norbert Seyff, Eugene Syriani, and Colin C. Venters

Research Highlights

93 Technical Perspective A Perspective on Pivot Tracing By Rebecca Isaacs

94 Pivot Tracing: Dynamic Causal Monitoring for Distributed Systems By Jonathan Mace, Ryan Roelke, and Rodrigo Fonseca

Association for Computing Machinery Advancing Computing as a Science & Profession


About the Cover: This month’s cover story explores how to build intelligent machines into moral machines. Case in point: Design autonomous vehicles that respond to emergencies with intelligent and ethical aptitude. As the authors of “Crowdsourcing Moral Machines” contend, it is a challenge that takes a village. Cover illustration by Kollected Studio.



COMMUNICATIONS OF THE ACM Trusted insights for computing’s leading professionals.

Communications of the ACM is the leading monthly print and online magazine for the computing and information technology fields. Communications is recognized as the most trusted and knowledgeable source of industry information for today’s computing professional. Communications brings its readership in-depth coverage of emerging areas of computer science, new trends in information technology, and practical applications. Industry leaders use Communications as a platform to present and debate various technology implications, public policies, engineering challenges, and market trends. The prestige and unmatched reputation that Communications of the ACM enjoys today is built upon a 50-year commitment to high-quality editorial content and a steadfast dedication to advancing the arts, sciences, and applications of information technology.












ACM, the world’s largest educational and scientific computing society, delivers resources that advance computing as a science and profession. ACM provides the computing field’s premier Digital Library and serves its members and the computing profession with leading-edge publications, conferences, and career resources.

Executive Director and CEO Vicki L. Hanson Deputy Executive Director and COO Patricia Ryan Director, Office of Information Systems Wayne Graves Director, Office of Financial Services Darren Ramdin Director, Office of SIG Services Donna Cappo Director, Office of Publications Scott E. Delman

ACM COUNCIL President Cherri M. Pancake Vice-President Elizabeth Churchill Secretary/Treasurer Yannis Ioannidis Past President Alexander L. Wolf Chair, SGB Board Jeff Jortner Co-Chairs, Publications Board Jack Davidson and Joseph Konstan Members-at-Large Gabriele Kotsis; Susan Dumais; Renée McCauley; Claudia Bauzer Mederios; Elizabeth D. Mynatt; Pamela Samuelson; Theo Schlossnagle; Eugene H. Spafford SGB Council Representatives Sarita Adve and Jeanna Neefe Matthews

BOARD CHAIRS Education Board Mehran Sahami and Jane Chu Prey Practitioners Board Terry Coatta

REGIONAL COUNCIL CHAIRS ACM Europe Council Chris Hankin ACM India Council Abhiram Ranade ACM China Council Wenguang Chen

PUBLICATIONS BOARD Co-Chairs Jack Davidson and Joseph Konstan Board Members Phoebe Ayers; Nicole Forsgren; Chris Hankin; Mike Heroux; Nenad Medvidovic; Tulika Mitra; Michael L. Nelson; Sharon Oviatt; Eugene H. Spafford; Stephen N. Spencer; Divesh Srivastava; Robert Walker; Julie R. Williamson

ACM U.S. Technology Policy Office Adam Eisgrau Director of Global Policy and Public Affairs 1701 Pennsylvania Ave NW, Suite 200, Washington, DC 20006 USA T (202) 580-6555; acmpo@acm.org

Computer Science Teachers Association Jake Baskin Executive Director

STAFF DIRECTOR OF PUBLICATIONS Scott E. Delman cacm-publisher@cacm.acm.org

Executive Editor Diane Crawford Managing Editor Thomas E. Lambert Senior Editor Andrew Rosenbloom Senior Editor/News Lawrence M. Fisher Web Editor David Roman Editorial Assistant Danbi Yu

Art Director Andrij Borys Associate Art Director Margaret Gray Assistant Art Director Mia Angelica Balaquiot Production Manager Bernadette Shade Intellectual Property Rights Coordinator Barbara Ryan Advertising Sales Account Manager Ilia Rodriguez

Columnists David Anderson; Michael Cusumano; Peter J. Denning; Mark Guzdial; Thomas Haigh; Leah Hoffmann; Mari Sako; Pamela Samuelson; Marshall Van Alstyne

CONTACT POINTS Copyright permission permissions@hq.acm.org Calendar items calendar@cacm.acm.org Change of address acmhelp@acm.org Letters to the Editor letters@cacm.acm.org

WEBSITE http://cacm.acm.org

WEB BOARD Chair James Landay Board Members Marti Hearst; Jason I. Hong; Jeff Johnson; Wendy E. MacKay

AUTHOR GUIDELINES http://cacm.acm.org/about- communications/author-center

ACM ADVERTISING DEPARTMENT 1601 Broadway, 10th Floor New York, NY 10019-7434 USA T (212) 626-0686 F (212) 869-0481

Advertising Sales Account Manager Ilia Rodriguez ilia.rodriguez@hq.acm.org

Media Kit acmmediasales@acm.org

Association for Computing Machinery (ACM) 1601 Broadway, 10th Floor New York, NY 10019-7434 USA T (212) 869-7440; F (212) 869-0481

EDITORIAL BOARD EDITOR-IN-CHIEF Andrew A. Chien eic@cacm.acm.org Deputy to the Editor-in-Chief Morgan Denlow cacm.deputy.to.eic@gmail.com SENIOR EDITOR Moshe Y. Vardi

NEWS Co-Chairs Marc Snir and Alain Chesnais Board Members Tom Conte; Monica Divitini; Mei Kobayashi; Rajeev Rastogi; François Sillion

VIEWPOINTS Co-Chairs Tim Finin; Susanne E. Hambrusch; John Leslie King; Paul Rosenbloom Board Members Terry Benzel; Michael L. Best; Judith Bishop; Lorrie Cranor; Boi Falting; James Grimmelmann; Mark Guzdial; Haym B. Hirsch; Richard Ladner; Carl Landwehr; Beng Chin Ooi; Francesca Rossi; Len Shustek; Loren Terveen; Marshall Van Alstyne; Jeannette Wing; Susan J. Winter

PRACTICE Co-Chairs Stephen Bourne and Theo Schlossnagle Board Members Eric Allman; Samy Bahra; Peter Bailis; Betsy Beyer; Terry Coatta; Stuart Feldman; Nicole Forsgren; Camille Fournier; Jessie Frazelle; Benjamin Fried; Tom Killalea; Tom Limoncelli; Kate Matsudaira; Marshall Kirk McKusick; Erik Meijer; George Neville-Neil; Jim Waldo; Meredith Whittaker

CONTRIBUTED ARTICLES Co-Chairs James Larus and Gail Murphy Board Members Robert Austin; Kim Bruce; Alan Bundy; Peter Buneman; Jeff Chase; Yannis Ioannidis; Gal A. Kaminka; Ben C. Lee; Igor Markov; Lionel M. Ni; Doina Precup; Shankar Sastry; m.c. schraefel; Ron Shamir; Hannes Werthner; Reinhard Wilhelm

RESEARCH HIGHLIGHTS Co-Chairs Azer Bestavros, Shriram Krishnamurthi, and Orna Kupferman Board Members Martin Abadi; Amr El Abbadi; Animashree Anandkumar; Sanjeev Arora; Michael Backes; Maria-Florina Balcan; David Brooks; Stuart K. Card; Jon Crowcroft; Alexei Efros; Bryan Ford; Alon Halevy; Gernot Heiser; Takeo Igarashi; Srinivasan Keshav; Sven Koenig; Ran Libeskind-Hadas; Karen Liu; Greg Morrisett; Tim Roughgarden; Guy Steele, Jr.; Robert Williamson; Margaret H. Wright; Nicholai Zeldovich; Andreas Zeller

SPECIAL SECTIONS Co-Chairs Sriram Rajamani, Jakob Rehof, and Haibo Chen Board Members Tao Xie; Kenjiro Taura; David Padua

ACM Copyright Notice Copyright © 2020 by Association for Computing Machinery, Inc. (ACM). Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from permissions@hq.acm.org or fax (212) 869-0481.

For other copying of articles that carry a code at the bottom of the first or last page or screen display, copying is permitted provided that the per-copy fee indicated in the code is paid through the Copyright Clearance Center; www.copyright.com.

Subscriptions An annual subscription cost is included in ACM member dues of $99 ($40 of which is allocated to a subscription to Communications); for students, cost is included in $42 dues ($20 of which is allocated to a Communications subscription). A nonmember annual subscription is $269.

ACM Media Advertising Policy Communications of the ACM and other ACM Media publications accept advertising in both print and electronic formats. All advertising in ACM Media publications is at the discretion of ACM and is intended to provide financial support for the various activities and services for ACM members. Current advertising rates can be found by visiting http://www.acm-media.org or by contacting ACM Media Sales at (212) 626-0686.

Single Copies Single copies of Communications of the ACM are available for purchase. Please contact acmhelp@acm.org.

COMMUNICATIONS OF THE ACM (ISSN 0001-0782) is published monthly by ACM Media, 1601 Broadway, 10th Floor New York, NY 10019-7434 USA. Periodicals postage paid at New York, NY 10001, and other mailing offices.

POSTMASTER Please send address changes to Communications of the ACM 1601 Broadway, 10th Floor New York, NY 10019-7434 USA

Printed in the USA.





vardi’s insights

Advancing Computing as a Science and Profession—But to What End?

Founded in 1947, the Association for Computing Machinery (ACM) is the oldest educational and scientific society dedicated to the computing profession.

With over 100,000 members around the world, it is also the largest. According to its 1947 Certificate of Incorporation, the purpose of the association was to "advance the science, design, development, construction and application of modern machinery and computing techniques, for performing operations in mathematics, logic, statistics, accounting, automatic control, and kindred fields." The narrowness of this purpose was recognized in the ACM Constitution, last changed in 1998, whose Article 2 offers the purpose of "advancing the art, science, engineering, and application of information technology, serving both professional and public interests by fostering the open interchange of information and by promoting the highest professional and ethical standards." ACM's website at acm.org offers yet a broader description of ACM's purpose, stating: "Advancing Computing as a Science & Profession—We see a world where computing helps solve tomorrow's problems, where we use our knowledge and skills to advance the profession and make a positive impact."

One can clearly see a growing commitment to the public good between the Certificate of Incorporation, the Constitution, and the descriptive text on ACM's website. While the latter text is nonbinding and could be seen as "marketing," the Preamble of ACM's Code of Ethics states: "Computing professionals' actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good." So ethical computing professionals have a responsibility to support the public good. But what is ACM's responsibility to the public good?

This year, we celebrate the 75th anniversary of "Science, The Endless Frontier," a highly influential report submitted in July 1945 to the President of the United States by Vannevar Bush, an American engineer and science administrator, who during World War II headed the U.S. Office of Scientific Research and Development, through which almost all wartime military research and development was carried out. The report, which led to the establishment of the U.S. National Science Foundation, argued that scientific progress is essential to human progress: "Progress in the war against disease depends upon a flow of new scientific knowledge. New products, new industries, and more jobs require continuous additions to knowledge of the laws of nature, and the application of that knowledge to practical purposes. Similarly, our defense against aggression demands new knowledge so that we can develop new and improved weapons." Bush argued, "this essential, new knowledge can be obtained only through basic scientific research" and is "the pacemaker of technological progress." As such, he concluded it is the role of the Federal Government to support the advancement of knowledge. His philosophy can be summarized in one phrase: "Science for the public good."

Bush's 1945 vision was recently revisited in the article "Science Institutions for a Complex, Fast-Paced World,"a by Marcia McNutt, president of the National Academy of Sciences, and Michael M. Crow, president of Arizona State University. Writing in Issues in Science and Technology, McNutt and Crow point out that "today's understanding of how knowledge, innovation, economic growth, and social change are all intimately interdependent is something of which Bush—and his world—had barely an inkling." Building on that, they note, "In the past 75 years, the challenges—from nuclear proliferation to climate change to wealth concentration to social media's impact on expertise and truth—that have resulted, at least in part, from society's application of scientific advances are now subjects that science itself must directly help to solve."

a https://issues.org/science-institutions/

McNutt and Crow stress that the institutions that carried out much of the scientific progress over the past 75 years must re-assess their mission and be committed not only to advancing scientific knowledge but also to addressing the societal problems that technology, driven by scientific knowledge, has created. In other words, the commitment to "science for the public good" should be to pursue the public good via science.

Computing professionals, like their colleagues in the sciences, must also accept the challenges of our era. It is time, in other words, to revisit and update the purpose of ACM. It is not enough to focus on science and profession. ACM's purpose must be "to advance the science and profession of computing for the public good." A vigorous discussion and debate on how best to work toward this purpose must now begin.

Follow me on Facebook and Twitter.

Moshe Y. Vardi (vardi@cs.rice.edu) is the Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University, Houston, TX, USA. He is the former Editor-in-Chief of Communications.

DOI:10.1145/3381047 Moshe Y. Vardi




letters to the editor

But if the cause of reducing computing's carbon footprint excites you, recognize that conference travel is a pittance when compared to the negative climate impact of computing's power consumption. Our research collaborators' 2019 estimates of global datacenter power consumption are nearly double earlier estimates—now 400 TWh! These numbers are a large multiple higher than the best projections based on 2013 data.3 Something important has changed. These numbers are shockingly large—and worse—they are growing fast. Recent press about hyperscale cloud reveals growth rates of perhaps 40% per year.2
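As a rough illustration of how quickly such growth compounds, the sketch below is our extrapolation, not a figure from the letter; it naively applies the press-reported 40% hyperscale growth rate to the overall 400 TWh figure, so it overstates the trend, but it shows the shape of the curve:

```python
# Hypothetical extrapolation: apply a 40%/year growth rate to a 400 TWh
# starting point (both figures cited in the letter above). Illustrative only.
consumption_twh = 400.0
growth_rate = 0.40
for year in range(2020, 2024):
    consumption_twh *= 1 + growth_rate
    print(year, round(consumption_twh), "TWh")
# Growth of 40%/year roughly doubles consumption every two years
# (1.4**2 ~= 1.96), reaching ~1,500 TWh by 2023 under this naive model.
```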

For more, see my broader call to action1 for computing professionals to address computing’s growing and problematic direct environmental impact. Let’s all get moving on this!

References
1. Chien, A. Owning computing's environmental impact. Commun. ACM 62, 3 (Mar. 2019), 5.
2. Kniazhevich, N. and Eckhouse, B. Google tops green-energy buys, BlackRock seen jogging new growth. Bloomberg Green (Jan. 28, 2020); https://bloom.bg/31qdPNt.
3. Shehabi, A. et al. United States Data Center Energy Usage Report. LBNL, June 2016.

Andrew A. Chien, Chicago, IL, USA

Reducing Biases in Clinical Prediction Modeling In "Algorithms, Platforms, and Ethnic Bias" (Nov. 2019), Selena Silva and Martin Kenney visualized a chain of major potential biases. The nine biases, which are not mutually exclusive, indeed must be considered in the design of any data-driven application that may affect individuals, especially if the biases have the potential to negatively affect a person's health condition.

Users may be slightly affected if they are exposed to irrelevant online advertisements or more greatly affected if they are unjustifiably refused a loan at the bank. Even worse would be a poorly designed algorithm that can cause a physician to make a decision that may be harmful to patients. An outdated risk-assessment algorithm can significantly affect many individuals, especially if broadly used.

Moshe Vardi makes an excellent point in his January 2020 column in noting we, as a community, should do more to reduce carbon emissions, and suggests ACM conferences do more to support remote participation. While I share his concern about carbon emissions, I have several concerns about his proposals for conferences.

First, time zones often make it difficult to participate in remote events, a problem that is also often faced by members of a distributed development team. At home, I’m nine hours behind Western Europe and about 12.5 behind India, so I would have to join late at night in both cases. That is just not a workable solution for a multiday conference.

Second, my own teaching experience during the past 15 years (plus countless faculty meetings) has repeatedly demonstrated that remote participants are less involved. Maybe they are trying (unsuccessfully) to multitask, but it is simply more difficult for remote attendees to ask questions or join a discussion unless it is a virtual event where everyone is remote and there is a moderator who recognizes participants in turn.

Third, the experience with online courses (Udacity, edX, among others) suggests material should be presented differently to a remote audience than to a local one. Khan Academy has long taught in 10-minute snippets, perhaps in recognition of the shorter attention spans of its audience. Personally, a brief illness last year caused me to deliver a keynote address remotely. Even though I cut my talk down to half of its original length and used slides, there were fewer questions and less discussion than I would have expected.

Fourth, it's important for aspiring and junior faculty to personally meet the senior faculty in their specialty F2F. Not only are they colleagues, but they are often valuable for supporting academic promotions. A connection over LinkedIn, even if accepted, falls well short of a personal connection. Vardi recognizes, and I agree, that there is an important social networking aspect to conferences that cannot be satisfied by remote participation.

Finally, conferences need to build their own community to assure their long-term success, including the leadership of future years of the conference. While it's easy to join a program committee remotely, conference and program chairs, as well as other members of the organizing committees, are more likely to come from repeat attendees who have developed personal relationships with conference organizers.

In summary, I’m trying to do my part (home solar panels, electric car) to reduce my carbon impact, but I think there are some difficult issues with Vardi’s proposal. I hope that we can continue the important discussion about our impact on the environment and find some alternative solutions that can address the issues raised here.

Anthony I. Wasserman, Moffett Field, CA, USA

Author's response Quoting from my column: "Of course, conferences are more than a paper-publishing system. First and foremost, they are vehicles for information sharing, community building, and networking. But these can be decoupled from research publishing, and other disciplines are able to achieve them with much less travel, usually with one major conference per year. Can we reduce the carbon footprint of computing-research publishing?"

Reducing our carbon footprint is an existential imperative. We cannot blindly cling to the way we have been doing things. For some fresh thinking, see, for example, http://uist.acm.org/uist2019/online/

Moshe Y. Vardi, Houston, TX, USA

Response from the Editor-in-Chief The idea that the field of computing could reduce its carbon impact by reducing the prominence of conferences and adopting practices from a number of other scientific fields is a good one, and I applaud Vardi's column, Wasserman's response, and other efforts recently highlighted in Communications (for example, see Pierce et al. on p. 35 of this issue).

Conferences and Carbon Impact DOI:10.1145/3380448





An example of such an algorithm is the Model for End-Stage Liver Disease (MELD) score, a risk-assessment algorithm for the liver that has been in use worldwide since 2002. The score was designed based on data captured from an extremely small group of patients and had only three laboratory covariates, which were manually selected, eliminating other potentially predictive covariates, such as age and other labs, later incorporated into the MELD-Plus score in 2017.

Reduction of biases in the design of clinical prediction modeling is crucial. To achieve such a reduction, it is necessary to precisely define the outcome to be predicted; when defining the exact occurrence of a diagnosis or exacerbation of a condition, relying on diagnosis codes alone may result in inaccuracy, as has been widely discussed in the medical literature. The date of exacerbation in heart failure, for example, must be defined by at least two independent data elements that are closely captured in time, such as a diagnosis code date and a diuretic prescription, as opposed to merely capturing an admission associated with the condition with no clear evidence that the primary reason for admission was the patient's worsening heart. To avoid such biases, for example, Khurshid et al.1 combined multiple data elements to identify the onset of atrial fibrillation.
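As a concrete sketch of the two-element rule described above (the record layout, code set, drug list, and seven-day window here are illustrative assumptions, not details from the letter), an exacerbation date might be flagged only when a heart-failure diagnosis code and a diuretic prescription occur close together in time:

```python
from datetime import date, timedelta

# Illustrative assumptions: code set, drug list, and the 7-day window are hypothetical.
HF_CODES = {"I50.9"}            # example heart-failure diagnosis codes
DIURETICS = {"furosemide"}      # example diuretic drug names
WINDOW = timedelta(days=7)      # require both signals within a week of each other

def exacerbation_dates(diagnoses, prescriptions):
    """Return dates where an HF diagnosis code and a diuretic prescription co-occur.

    diagnoses:     iterable of (date, icd_code) pairs
    prescriptions: iterable of (date, drug_name) pairs
    """
    rx_dates = [d for d, drug in prescriptions if drug in DIURETICS]
    hits = [dx_date for dx_date, code in diagnoses
            if code in HF_CODES
            and any(abs(dx_date - rx) <= WINDOW for rx in rx_dates)]
    return sorted(hits)

# A lone diagnosis code (Sept. 1) is not counted; a code plus a nearby
# diuretic prescription (June 3 + June 5) is.
dx = [(date(2019, 6, 3), "I50.9"), (date(2019, 9, 1), "I50.9")]
rx = [(date(2019, 6, 5), "furosemide")]
print(exacerbation_dates(dx, rx))   # [datetime.date(2019, 6, 3)]
```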

To reduce biases even further, another approach would be to avoid using subjectively selected elements. For example, there is great variability in how physicians use diagnosis codes to document conditions such as hypertension and type-2 diabetes; such conditions could be defined more precisely based on actual lab values (for example, A1C and blood pressure) rather than relying on diagnosis codes alone. Furthermore, although it is widely known that genetic as well as behavioral variabilities exist across ethnicities and regions of residence, such data elements must be used with caution when incorporated into predictive risk scores because these factors are not objectively measured as labs and may be coincidental relative to a medical outcome and not serve as reliable predictors.

Reference
1. Khurshid, S., Keaney, J., Ellinor, P.T., and Lubitz, S.A. A simple and portable algorithm for identifying atrial fibrillation in the electronic medical record. Am. J. Cardiol. (2016).

Uri Kartoun, Cambridge, MA, USA

Where Good Software Management Begins Bertrand Meyer's critique on a project's critical path and Brooks' Mythical Man-Month is so laced with pejorative themes (Blog@CACM, Jan. 2019); his basic thought that heuristics and mathematical models should always be tailored to the situational context is only laboriously revealed. Mocking and ridiculing the work of earlier practitioners negates one's own ideas, as we all build on yesterday's results.

Brooks' insight is not a law, but a heuristic based on the simple mathematical formula that calculates the possible number of channels (edges) of communication between a given number of people (nodes): C = N(N-1)/2.
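A quick worked example (the team sizes below are ours, not the letter writer's) shows why that quadratic growth is the crux of Brooks' point:

```python
# Possible communication channels among N people: C = N*(N-1)/2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 20):
    print(f"{n:>2} people -> {channels(n):>3} channels")
# 5 -> 10, 10 -> 45, 20 -> 190: doubling a team roughly quadruples the
# coordination paths, which is why adding people to a late project can slow it.
```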

Whether ineffective managers blindly throw additional money and resources at a project ('crashing a project' in project management nomenclature) is not a fault of Brooks' insight, but a misapplication of the principle.

The Project Management Institute (PMI) has a well-documented Body of Knowledge (PMBOK) including earned value management (EVM), a suite of simple formulas that use a common cost unit of measure (dollars) across both time and cost.

One of the initial guidance principles of PMI and systems engineering is that the rigor and scope of the use of the tools should always be tailored to the particular effort; in other words, you don't need a shotgun when going to an arm-wrestling contest.

Good software engineering management always calls for intelligent application and balance of cost, scope, and time. If you constrain any one side of this triple constraint, the other two will flex. It's not rocket science. And, if anything 40 years old is obsolete, we may as well drop Euclidean geometry, after the advent of Einstein's work and non-Euclidean geometry.

Michael Ayres, San Francisco, CA, USA

Author's response I am not sure Ayres paid enough attention to what my blog actually says. It is not a "critique" and does not mock anyone. It is the reverse of "pejorative," that is to say, it is actually laudatory: it brings to the attention of the Communications readership, particularly software project managers, the importance of a key result reported in Steve McConnell's 2006 book, pointing out it deserves to be better known. This is its plain goal, not "that heuristics and mathematical models should always be tailored to the situational context" (which, if I understand this sentence correctly, is probably true but not particularly striking and not what I wrote).

“Brooks’ insight is not a law:” True, that’s indeed what my article says, but “Brooks’ Law” is what Brooks himself called it when he introduced it in The Mythical Man-Month.

“Anything 40 years old is obsolete:” Of course not, nor did I imply anything like this. Same thing for the blaming of Brooks’ Law for ineffective managers; my article makes no such representation.

I guess Ayres’s main goal is to highlight the value of the PMBOK, a recommendation that I am happy to endorse.

Bertrand Meyer, Zürich, Switzerland

© 2020 ACM 0001-0782/20/3 $15.00

A Regional Special Section on East Asia and Oceania

Cyber Warranties: Market Fix or Marketing Trick

The Antikythera Mechanism

A Q&A with Mendel Rosenblum

Managing the Hidden Costs of Coordination

Cognition Work of Hypothesis Exploration During Anomaly Response

Plus the latest news about reviving dead languages, tasting technologies, and how universities deploy data.

Coming Next Month in Communications










Follow us on Twitter at http://twitter.com/blogCACM

The Communications Web site, http://cacm.acm.org, features more than a dozen bloggers in the BLOG@CACM community. In each issue of Communications, we’ll publish selected posts or excerpts.

Let's probe deeper. This is not about voting laws, or districts, or methods,2 all rich fields of inquiry in their own right. This is about voting procedures as reflected in the design and implementation of software and hardware. Of special concern is voting with electronic assistance. The scope here is the election system as defined by the National Academies report5 [page 13, footnote 5]—roughly, a technology-based system for collecting, processing, and storing election data. A special issue of this publication3 in October 2004 carried several articles on this subject still worth reading, including the rejection of the SERVE system4 that put a stop to the optimistic network-voting plans of the time. This discussion also will refer to sections of the ACM Code of Ethics, as a means of taking the Code out for a spin.1

Musing on the peculiarities of voting in the abstract suggests a vote is symbolic, discrete, and devoid of connotation; not an act of communication, but an act of declaration, single-shot, unnegotiated, unilateral. Should it exist as an entity; should a vote be preserved somehow? On paper, it does exist as a tally mark. A poll worker could point to it, and even associate it with other descriptions ("the eleventh one" or "the ballot with the bent corner"). A vote may be open to construal as a first-class artifact (existing on its own, subject to creation, destruction, examination, and modification) that lacks a description or identifier by design. First-class objects can be passed as parameters; votes are passed to tallying functions. First-class objects can be compared for equality; that is the salient feature of votes—sameness to or difference from other votes, a stark quality. The voter must give an all-or-nothing choice on each question, no hedging allowed. The hierarchy is flat. All votes count equally, so three votes cast in one polling place should be handled as carefully as thousands from another.
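Read as a programming sketch (our illustration of the post's metaphor, not code from it), a vote might be modeled as a first-class value that carries only the choice itself, is passed to a tallying function, and supports only equality comparison:

```python
# Minimal sketch of the "vote as first-class artifact" idea: a value with no
# voter identifier by design, passed to tallying functions and compared only
# for sameness to or difference from other votes.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    choice: str                 # an all-or-nothing selection on one question

def tally(votes):
    """Votes are passed as parameters; the tally is derived from them."""
    return Counter(v.choice for v in votes)

ballots = [Vote("yes"), Vote("no"), Vote("yes")]
print(tally(ballots))                  # Counter({'yes': 2, 'no': 1})
print(Vote("yes") == Vote("yes"))      # equality is the salient comparison: True
```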

Now to take on the responsibilities of the computing professional, let's outline those at play before coding starts.

First responsibility of the computing professional: To understand why trust in voting is critical. Democracy relies on voting to reveal the collective will of the electorate. In the long view, as in the ethics of care,7 background matters and situations cannot be assessed in the moment, but must be viewed in a wider scope in time and place. The National Research Council published a report in 2006 remarking, "…although elections do determine in the short run who will be the next political leaders of a nation (or state or county or city), they play an even greater role in the long run in establishing the foundation for the long-term governance of a society. Absent legitimacy, democratic government, which is derived from the will of the people, has no mandate to govern."6

Robin K. Hill Voting, Coding, and the Code http://bit.ly/2t5QQe5 November 27, 2019

Our profession is to be commended for taking steps toward the establishment of computing ethics. They may be baby steps (akin to unstable toddling accompanied by incoherent babble) or perhaps tween steps (akin to headlong running accompanied by giggles, tumbles, and sobs), but steps they are. Let's consider a fundamental process critical to democracy: Voting. The author is inspired by the sesquicentennial, on December 10th, of the passage of the suffrage act in Wyoming, granting women the right to vote and to hold office. Wyoming was a territory at the time, the first known government body to pass general and unconditional (and permanent) female suffrage well before the 19th Amendment granting national suffrage, and entered the Union in 1890 as the first state where women could vote.

What is the responsibility of the computing professional with respect to voting systems? The obvious criteria are accuracy in recording and tallying, reliability in uptime, and security from malicious intervention; all of these are needed for the promotion of trust.

Coding for Voting Robin K. Hill explains the ethical responsibility of the computing professional with respect to voting systems.

DOI:10.1145/3379491 http://cacm.acm.org/blogs/blog-cacm





The report goes on to make the important point that elections must, in particular, satisfy the losers, preserving the trust that allows them to tolerate the policies of the winners. Code 2.1: "Professionals should be cognizant of any serious negative consequences affecting any stakeholder…" Under American standards, loss of faith in democratic government would be a serious negative consequence.

Second responsibility: To know the criteria for an acceptable election system. These criteria include, as examples, that voting should be easy for everyone; that ballots should present all candidates neutrally; that tallying should be computable by the average person; that audits should be possible. Privacy should be secured under all circumstances (Code 1.6: "Respect privacy," and 1.7: "Honor Confidentiality"). The result should be dictated by all and only the exact votes cast. Other sources may give somewhat different criteria, but major standards are accepted universally. Life-support systems demand high reliability. Military systems demand high security. Financial transactions demand high accuracy. Voting demands all of those. Security looms over all of the Code, and is explicitly mentioned in 2.9: "Design and implement systems that are robustly and usably secure." Accuracy, which must also loom over the Code, is not mentioned explicitly. Surely generating wrong answers is the worst transgression of a computing professional. References to quality of work must be intended to cover accuracy or correctness (Code 2.1, 2.2), as well as basic standards of maintainability, efficiency, and so forth, but we might ask whether correctness is a responsibility that transcends these others.

Next responsibility: To interrogate all circumstances, to appreciate the complications, and to acknowledge that unanticipated circumstances will arise. An election system involves many steps of preparation, execution, and resolution, from ballot design and training of poll workers to delivering recounts (and improving procedures for the next election). Complications are rooted in the real-world setting, and the peculiar status of a vote as anonymous but distinct artifact. Code 2.2: "Professional competence starts with technical knowledge and with awareness of the social context in which their work may be deployed." Our county clerk's staff will carry a ballot outside to a car (advance notice requested) for those who cannot easily walk into the polling place. Does that affect the rest of the election system? Code 2.3: "Know and respect existing rules pertaining to professional work." This could mean the entire local voting code and protocols. If one race is over-voted, does that invalidate the whole ballot? How should a write-in be detected? Under what circumstances is a ballot provisional? If the wind blows a ballot out the window onto a piece of charcoal that marks it, or under a car tire that punches it, after its assignment to a voter, how is it replaced? Anecdotes in electoral research describe exceptions to the notions that conscientious voters mark ballots unambiguously, and error-free methods tally those votes.8 An election system must accommodate every non-standard circumstance. Voting is a domain where no data point can be dismissed as "in the noise."

Thus prepared, the computing professional can perform the hardware and software design, coding, and testing. All of the Code applies. Afterward, there are other professional obligations.

Final responsibility of the computing professional: To announce and explain vulnerabilities, errors, quirks, and unknowns, and to suggest solutions. This responsibility is in service to the main one, trust. Demonstrated full disclosure is the best way to instill confidence that, in the face of no disclosure, nothing bad is happening. Code 2.5: "Computing professionals are in a position of trust, and therefore have a special responsibility to provide objective, credible evaluations and testimony to employers, employees, clients, users, and the public." Code 3.7: "Continual monitoring of how society is using a system will allow the organization or group to remain consistent with their ethical obligations outlined in the Code."

As a hypothetical, let's think of a software engineer who notices the tally is incorrect by a small number of votes that exactly offset each other, an error that makes no difference to the tally, nor to the outcomes of any races. Should that flaw be debugged internally? Of course. Should the incident be made public? Yes, because any problem may result in future distortion, which brings this situation under the requirement of Code 1.2: the "obligation to report any signs of system risks that might result in harm." It should be made public as a demonstration that votes are prioritized above tallies. The vote is primary; the tally is derivative. This may have unpleasant repercussions to the programmer, but ethical professionals sacrifice themselves before they sacrifice voters.

These responsibilities apply to all who have a hand in American voting, not just computing professionals. Everyone involved should mind Code 2.9: "In cases where misuse or harm are predictable or unavoidable, the best option may be to not implement the system." The latest National Academies report, among several specific recommendations ranging over many aspects of election systems, recommends the Internet not be used for submitting ballots.5

This observer (who claims high interest but shallow expertise) concludes voting turns out to be more complicated than was thought in the early days when electronic procedures were broached. Even though it appears to be counting—the simplest computation of all—voting is a process not amenable to automation except where subordinate to the judgment of election officials. We see the ACM Code of Ethics provides broad but cogent guidance for this computing activity, although we would like to see accuracy incorporated explicitly.

References
1. ACM Code 2018 Task Force. ACM Code of Ethics and Professional Conduct. Association for Computing Machinery, June 22, 2018; https://www.acm.org/code-of-ethics.
2. Brandt, F., Conitzer, V., Endriss, U., Lang, J., and Procaccia, A., Eds. Handbook of Computational Social Choice. Cambridge University Press, 2016.
3. Commun. ACM 47, 10 (Oct. 2004).
4. Jefferson, D., Rubin, A.D., Simon, B., and Wagner, D. Analyzing Internet voting security. Commun. ACM 47, 10 (Oct. 2004).
5. National Academies of Sciences, Engineering, and Medicine and others. Securing the Vote: Protecting American Democracy. National Academies Press, 2018.
6. National Research Council and others. Asking the Right Questions About Electronic Voting. National Academies Press, 2006.
7. Sander-Staudt, M. Care Ethics. Internet Encyclopedia of Philosophy; accessed 24 November 2019. ISSN 2161-0002.
8. Wikipedia contributors. Spoilt vote. Wikipedia; accessed 27 November 2019.

Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.

© 2020 ACM 0001-0782/20/3 $15.00















The computing world has always relied on advances in semiconductors. Over the decades, smaller and more efficient transistor designs have produced faster, more powerful, more energy-efficient microchips. This has fueled incredible advances in everything from supercomputing and clouds to smartphones, robotics, virtual reality, augmented reality, additive fabrication, and the Internet of Things (IoT).

The march toward more sophisticated microprocessors has continued unabated for decades. However, Moore's Law, which states the number of transistors in an integrated circuit doubles approximately every one-and-a-half to two years, has begun to slow in recent years. The reason? It has become more difficult to use MOSFET (metal-oxide-semiconductor field-effect transistor) scaling techniques to achieve continued miniaturization. Many chips now contain 20 billion or more switches. Engineers are running into enormous challenges as they reach the physical limits of existing technology.

However, an emerging technology promises to change the equation. Nanosheet transistors, which also go by the names gate-all-around, multi-bridge channel, and nanobeam, push beyond today's 7-nanometer (nm) node and into more-advanced 5 nm designs with performance boosts of approximately 40% and power consumption cuts of 75%. Samsung announced in May last year it had perfected nanosheet transistors and would be introducing them commercially in the first half of this year.

Can Nanosheet Transistors Keep Moore’s Law Alive? The technology promises to advance semiconductors and computing, but also introduces new questions and challenges.

Science | DOI:10.1145/3379493 Samuel Greengard

A 2017 scan of the IBM Research Alliance’s 5nm silicon nanosheet transistor containing 30 billion switches.






"It's a huge advance in the device structure itself. It will enable significant advances in computing," says Mukesh V. Khare, a vice president at IBM Research.

Miniaturization Matters Moore's Law has served the semiconductor industry well since Intel co-founder Gordon Moore introduced the idea in 1965. Just over a half-century later, transistor designs appear to finally be approaching their physical limits—at least using current materials and designs. "We are reaching a quantum threshold where the transistors cannot get a lot smaller and we cannot keep on achieving gains at the speed of Moore's Law," explains Peide Ye, Richard J. and Mary Jo Schwartz Professor of Electrical and Computer Engineering at Purdue University.

Current transistors use a time-proven design based on MOSFET technology, which has been in use since 1959. While shapes and materials have advanced and changed over the years, the basic engineering remains the same. The design incorporates a gate stack, channel region, source electrode, and a drain electrode. The structure is designed to transport positive (p-type) or negative (n-type) charges. Together, they produce an integrated circuit (IC) needed for the complementary metal-oxide-semiconductor (CMOS) technology that powers computers and mobile phones.

Today's designs place the gate stack directly above the channel area. The metal gate stack sits atop a dielectric material that conducts an electric field into the transistor channel region to accumulate or block charges that could flow through. In basic terms, this allows current to flow across the transistor and switch on and off as needed. The problem is that as these structures become smaller, it becomes more difficult to block the charge leak across the transistor. The resulting leakage leads to hotter, less power-efficient microchips. Engineers have approached this problem by making the channel region thinner and thinner.

Fin Field Effect Transistor (FinFET) technology is used in virtually all of today's processors. It incorporates stacked sheets and a channel region that is tilted upward (think of it as a wall) to create a wider path for current. The gate and dielectric are placed over the fin so that it is surrounded on three sides instead of just one; this helps reduce current leakage. These three-dimensional (3D) designs, used by major semiconductor manufacturers, have shrunk from about 22 nm in 2011 to between 7 nm and 5 nm today. Unfortunately, they cannot be built at the 3-nm scale and accommodate current switching methods. "The leakage and power drain are simply too much for the technology to be viable at this scale," says Dan Hutcheson, CEO of VLSI Research, Inc., a market research and consulting firm.

For years, researchers and engineers have known they were approaching the end of the road for current transistor designs. Although myriad tweaks, advances, and trade-offs have led to ongoing advances in central processing units (CPUs), graphics processing units (GPUs), and other chips, the need for radically different designs was completely apparent. Nanosheets extend performance by removing material between layers of other material and filling in the gaps with both metal and dielectric.

This leads to a smaller-scale design. What is more, "The gate is wrapping around all four sides of the silicon and the silicon channel thickness scaling is controlled by epitaxial growth, which moves things beyond nanometer control and into atomic level control," Khare explains.

Beyond Silicon At the heart of nanosheet transistors are new materials and radical design changes. Gary Patton, CTO and head of worldwide research and development at GlobalFoundries, has described them as "a smaller, faster, and more cost-efficient generation of semiconductors." The technology, which IBM began researching in 2006 and which took shape under a public-private industry alliance, essentially creates a device architecture with stacked layers of silicon sheets by retaining the silicon layers from a superlattice structure that consists of alternating crystal layers of silicon and silicon germanium.

The significance of these new materials and designs, such as germanium, should not be minimized. Chipmakers have been forced to reduce clock speeds because of the enormous heat produced by high transistor density. However, by incorporating new materials and designs, it is possible to replace several slower processor cores with a single chip that operates as fast while generating less heat. In some cases, electrons can move more than 10 times faster in these semiconductor designs.

Nanosheet technology represents a remarkable advance in transistors. “These nanosheet layers are patterned lithographically to form gates that wrap around the junction between the source and drain by etching away unwanted material. This is done multiple times to form structures that look something like the center of a layer cake cut in thirds,” Hutcheson explains.

It is possible to place upward of 30 billion switches on a fingernail-sized chip. The gate surrounds the channel region in its entirety to deliver greater control than FinFET. This "stacked" structure supports far more advanced semiconductor fabrication processes. "When the industry figured out how to use certain chemistries to lay down substances at a single molecular level and then place others on top of it, the manufacturing process advanced radically," Hutcheson says. "They were no longer painting on a thick surface. They could control the deposited material to a single atomic layer."

The Endura Clover system from Applied Materials, for example, can apply up to 30 layers within a single stack only a few angstroms thick. This ensures an extremely high level of production quality.

To be sure, "Nanosheet transistors are far more than a technology iteration. It is extending Moore's Law for several more years. In fact, the design framework surrounding nanosheet transistors will allow researchers and engineers to develop even more advanced transistors and standard cells than FinFET technology allows, including flexibility in circuit design," Khare says. "The industry is converging around this device structure and it is moving forward with fabs and production. Nanosheet transistors are creating a new ecosystem for device structure, modeling, process technology, and various materials."

Of course, the transition will not happen overnight. The technology will require entirely new fabs and changes in distribution channels. "The cost of a new fab is in the $20-billion range, so it isn't something to take casually. There's an enormous amount of money and planning that must go into their transition," says Purdue's Ye.

While the first nanosheet transistors likely will appear from Samsung some time this year, it may take several more years before production scales up to support widespread adoption. Only Intel, Samsung, and Taiwan Semiconductor Manufacturing Co. (TSMC) have the means to handle this level of miniaturization. It is far more complicated than updating existing fabs. Chipmakers must build entirely new fabs with equipment and systems to handle the specialized nanosheet construction, at a cost that can reach $20 billion, Ye says.

Ultimately, the question is not whether nanosheet technology will impact the market, but rather when and how. "There's no way to know when we will hit the crossover point and nanosheet transistors will become the dominant technology," Khare says. "There are a lot of technical and economic issues that intersect with it. What's clear is that we will see products emerging within a couple of years and they will impact many aspects of computing, from devices and datacenters to the edge of the network."

Make no mistake, nanosheet transistors will lead to more powerful devices that utilize power far more efficiently—a key consideration in an era where battery life matters, energy costs are exorbitant, and climate change concerns are growing. The technology will introduce new and more advanced capabilities, particularly in the artificial intelligence arena, where advances in computing power can fuel exponential gains. Says Hutcheson, "We will have transistors that can handle heavy-duty artificial intelligence. The impact will ripple out to datacenters, smartphones, self-driving cars, and many other areas."

Designs on the Future Nanosheet transistors will shape the semiconductor industry for years to come. The technology takes aim at a fundamental problem, Khare says. "Integrated circuits (IC) have been stuck at the same power density for about a decade. It's been impossible to remove more than about 100 watts per square centimeter." Chip designers have focused on keeping heat buildup down, including limiting clock speed to 4 gigahertz or less and using slower multi-core designs that substitute for a more-powerful single processor, but generate much less heat.

Nanosheets can break through this barrier with a more efficient transistor design combined with new material, like germanium. It could push the ranges further for power and energy consumption. This addresses a major problem in semiconductors: "As feature sizes shrink, conventional methods of manufacture fail to produce devices that work well electrically," Hutcheson says. "With the tri-gate structure used on current devices, they can fail to switch on or off, because there is not enough surface area contacted by the gate, hence the need to wrap all four sides. There can also be power dissipation problems due to leakage. The reason why new materials are needed is that silicon can't be deposited over a dielectric in a properly oriented crystalline form to form the junctions."

The nanosheet technology also opens up new possibilities and opportunities within the semiconductor field, especially when combined with new materials. These design improvements also create more favorable economics for manufacturing because today's technology is too expensive to produce with some of these materials, Hutcheson says.

In order to bump up clock speeds, Ye and others say it is necessary to produce more powerful and energy-efficient transistors than silicon alone can deliver. Consequently, he and others continue to research different materials and designs that can be used in the channel region. This includes germanium, as well as semiconductors built from indium gallium arsenide (InGaAs). Other researchers are exploring how combinations of germanium, indium arsenide, and gallium antimonide can offer even greater efficiencies in nanosheets and other semiconductor designs.

Researchers have found that electrons can move up to 10 times faster within these more-advanced semiconductors. The end result is chips that not only switch faster, but also operate at much lower voltage levels—thus enabling new types of functionality and features. These designs likely will introduce capabilities that we can't imagine today. For now, chipmakers are sold on the concept. Most have already committed to using nanosheet transistors in their future designs.

Concludes Ye: “The combination of nanosheet transistors and advances in semiconductors will carry us far into the future. The technology will have a significant impact on computing.”

Further Reading

Ye, P., Ernst, T., and Khare, M.V. The last silicon transistor: Nanosheet devices could be the final evolutionary step for Moore's Law. IEEE Spectrum 56, 8 (August 2019). https://ieeexplore.ieee.org/abstract/document/8784120

Moayed, M.M.R., Bielewicz, T., Noei, H., Stierle, A., and Klinke, C. High-Performance n- and p-Type Field-Effect Transistors Based on Hybridly Surface-Passivated Colloidal PbS Nanosheets. Advanced Functional Materials 28, 19 (May 9, 2018). https://onlinelibrary.wiley.com/doi/abs/10.1002/adfm.201706815

Dahiya, A.S., Sporea, R.A., Poulin-Vittrant, G., and Alquier, D. Stability evaluation of ZnO nanosheet based source-gated transistors. Scientific Reports, Article number 2979 (February 27, 2019). https://www.nature.com/articles/s41598-019-39833-8

Samuel Greengard is an author and journalist based in West Linn, OR, USA.

© 2020 ACM 0001-0782/20/3 $15.00

tion. It is extending Moore’s Law for several more years. In fact, the design framework surrounding nanosheet transistors will allow researchers and engineers to develop even more ad- vanced transistors and standard cells than FinFET technology allows, in- cluding flexibility in circuit design,” Khare says. “The industry is converg- ing around this device structure and it is moving forward with fabs and production. Nanosheet transistors are creating a new ecosystem for device structure, modeling, process technol- ogy, and various materials.”

Of course, the transition will not happen overnight. The technology will require entirely new fabs and changes in distribution channels. “The cost of a new fab is in the $20-billion range, so it isn’t something to take casually. There’s an enormous amount of money and planning that must go into their transition,” says Purdue’s Ye.

While the first nanosheet transistors likely will appear from Samsung some time this year, it may take several more years before production scales up to support widespread adoption. Only Intel, Samsung, and Taiwan Semiconductor Manufacturing Co. (TSMC) have the means to handle this level of miniaturization. It is far more complicated than updating existing fabs. Chipmakers must build entirely new fabs with equipment and systems to handle the specialized nanosheet construction, at a cost that can reach $20 billion, Ye says.

Ultimately, the question is not whether nanosheet technology will impact the market, but rather when and how. “There’s no way to know when we will hit the crossover point and nanosheet transistors will become the dominant technology,” Khare says. “There are a lot of technical and economic issues that intersect with it. What’s clear is that we will see products emerging within a couple of years and they will impact many aspects of computing, from devices and datacenters to the edge of the network.”

Make no mistake, nanosheet transistors will lead to more powerful devices that utilize power far more efficiently, a key consideration in an era where battery life matters, energy costs are exorbitant, and climate change concerns are growing. The technology


















Algorithms to Harvest the Wind
Wake steering can help ever-larger turbines work together more efficiently on wind farms.

Technology | DOI:10.1145/3379497 Don Monroe

Wind-generated electricity has expanded greatly over the past decade. In the U.S., for example, by 2018 wind was generating 6.6% of utility-scale electricity generation, according to the U.S. Energy Information Administration. The criteria for efficient design and reliable operation of the familiar horizontal-axis wind turbines have been well established through decades of experience, leading to ever-larger structures over time, both to intercept more wind and to reach faster winds higher up.

As these gargantuan turbines are assembled into large wind farms, often spread over uneven terrain, complex aerodynamic interactions between them have become increasingly important. To address this issue, researchers have proposed protocols that slightly reorient individual turbines to improve the output of others downwind, and they are working with wind farm operators to assess their real-life performance. Beyond extracting more power from current farms, widespread use of these “wake-steering” techniques could allow denser wind farm designs in the future.

Bigger Is Better
“The tendency is to build higher and higher turbines,” said Mireille Bossy, a fluid dynamics expert at Inria, the French national institute for computer science and applied mathematics, located in the Sophia Antipolis technology park near Nice, France. “We are talking in a new project about 300m [about 984 feet] in height.” The wake of slower disturbed air typically extends 10 or more times the diameter of a turbine, robbing downwind turbines of wind. Completely avoiding power loss for downwind turbines would demand several kilometers between turbines, incurring substantial additional costs in real estate and wiring.

These costs and the specific constraints of available sites often lead to less-than-optimal arrangements, however. Predicting the interactions is difficult, especially for farms located in terrain that may create turbulent flows. Bossy described one existing farm where the addition of a single turbine, in what seemed like a good place to minimize wake interactions, would decrease the output of the entire facility. “It’s complicated,” she stressed. “We cannot just do some wind-tunnel simulation. We need to simulate the site.”

Wake Steering
Things get complicated quickly, because each combination of wind direction and speed, as well as other atmospheric details, requires a new simulation. Fortunately, the wake interactions can be reduced by “yawing” the turbines: rotating them slightly around a vertical axis, which deflects their wakes. Although a misalignment slightly reduces the power output of the upwind turbine, for some wind directions this can be more than offset by increased power from downwind turbines (which is proportional to the cube of the windspeed).

Still, the number of combinations of possible yaw angles for each turbine, as well as windspeeds and directions, quickly becomes computationally challenging. For this reason, the U.S. National Renewable Energy Laboratory (NREL) has developed both a calculation-intensive “large-eddy” computational-fluid-dynamics model, called Simulator fOr Wind Farm Applications (SOWFA), and a simpler tool for steady-state calculations called FLOw Redirection and Induction in Steady State (FLORIS).

Paul Fleming, who developed these tools with colleagues at NREL and the Delft University of Technology in The Netherlands, noted that although there is still debate about how to deal with rapidly shifting wind directions, there “seems to be some convergence toward steady-state modeling.” Wind farm operators currently prefer to set a yaw angle and hold it for a while, “striking some balance between trying to keep up with the changing wind direction and trying to yaw as little as possible,” he said. “Wake steering has to be built on the same structure.”

A wind farm’s electricity generation over the course of a year, known as annual energy production or AEP, is likely to see only small fractional increases from wake steering, in part because many wind directions would not create substantial losses in any case. For existing facilities, Fleming said, “a reasonable guess for total AEP gain is somewhere between 1% or 2%.” The gains could be especially compelling for offshore generation, where winds tend to be steadier, turbines larger, and wakes more persistent, but land-based sites can benefit as well.

This improvement may seem modest, but it could amount to millions of dollars of revenue for very little cost. “It’s garnered a lot of interest” from the industry, Fleming said. Indeed, at a September 2019 meeting attended by major wind developers, he said, “There was pretty broad agreement that something like this will be adopted more widely.”

Out in the Field
To get this kind of buy-in, “field tests are critical,” Fleming said. For this reason, he and his colleagues have worked with operators, including NextEra, which he said is “the largest owner of turbines in the U.S.,” to conduct field trials that have validated the simulation predictions. For one unusually close pair of turbines, spaced approximately three times the diameter of the turbines apart, the power from the downwind turbine for the worst wind direction was increased by about 14% when the upwind turbine was yawed to deflect the wake. This deflection produced an overall 4% increase for the pair.
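The trade-off behind pair studies like this can be sketched with a deliberately simplified toy model: a cosine-exponent loss for the misaligned upwind turbine, a wake deficit that shrinks as the wake is deflected, and the cube-law dependence of power on wind speed. Every coefficient below is invented for illustration; this is not SOWFA, FLORIS, or the model used in the field campaign.

```python
import math

# Toy two-turbine wake-steering sketch. The cos^p yaw-loss term and a wake
# deficit that shrinks as the wake is deflected sideways are common textbook
# simplifications; the coefficients are made up and NOT taken from the article.

def pair_power(yaw_deg, deficit=0.30, p=2.0, steer_gain=0.005):
    """Combined power of an upwind/downwind pair, each normalized to 1.0 unwaked."""
    upwind = math.cos(math.radians(yaw_deg)) ** p            # misalignment loss
    eff_deficit = max(0.0, deficit - steer_gain * abs(yaw_deg))
    downwind = (1.0 - eff_deficit) ** 3                       # cube law on waked wind speed
    return upwind + downwind

baseline = pair_power(0)
best_yaw = max(range(31), key=pair_power)                     # brute-force search, 0..30 degrees
print(f"no steering: {baseline:.3f}")
print(f"best yaw {best_yaw} deg: {pair_power(best_yaw):.3f} "
      f"({100 * (pair_power(best_yaw) / baseline - 1):+.1f}%)")
# For these made-up numbers the optimum lands around 15-20 degrees of yaw and
# a few-percent gain for the pair.
```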

“Right now, the algorithms we’re implementing aren’t very complicated; they’re essentially a lookup table” of yaw offsets for a particular windspeed and direction, Fleming said. Over time, as the technique proves its value, he expects these algorithms can be refined.
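A lookup-table controller of the kind Fleming describes might look something like the following sketch; the bin sizes and offset values are hypothetical and are not NREL’s actual tables.

```python
# Hypothetical yaw-offset lookup table keyed by binned wind direction and
# speed, in the spirit of the "essentially a lookup table" controller
# described above. All bins and offsets are invented.

YAW_OFFSET_DEG = {
    # (direction bin start in degrees, speed bin start in m/s): offset for one turbine
    (270, 6): 18.0,
    (270, 9): 12.0,
    (280, 6): 10.0,
    (280, 9): 6.0,
}

def yaw_offset(direction_deg: float, speed_ms: float,
               dir_bin: float = 10.0, speed_bin: float = 3.0) -> float:
    """Return the tabulated yaw offset for the bin containing this condition."""
    key = (int(direction_deg // dir_bin) * int(dir_bin),
           int(speed_ms // speed_bin) * int(speed_bin))
    return YAW_OFFSET_DEG.get(key, 0.0)   # default: no steering outside the table

print(yaw_offset(273.5, 7.2))   # -> 18.0
print(yaw_offset(300.0, 8.0))   # -> 0.0 (condition not covered by the table)
```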

John Dabiri, now at the California Institute of Technology, recently explored one such refinement with colleagues, and followed it up with field experiments. “What we were aiming for was to do site-specific optimization: for a given layout, a given terrain, a given location where the wind conditions are what they are, and to be able to incorporate historical data in a way that informs a physics model.”

Other researchers have used such historical data, capturing how much energy each turbine generated under various conditions with no wake steering, to train machine learning models. “The challenge is that we don’t typically have enough data,” Dabiri said, so models can overfit the existing data but fail to generalize to different locations. He and his team combine the data with a simplified physics model to match each site. The model is efficient enough to optimize the entire set of yaw angles “on a laptop computer in a few seconds.”
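One hedged guess at what combining historical data with a simplified physics model could look like in practice: fit a single wake-deficit coefficient per site to recorded per-turbine power, then reuse the calibrated model for optimization. This is an illustrative sketch with synthetic data, not the Stanford/Caltech group’s actual method.

```python
import numpy as np

# Illustrative site calibration: fit one wake-deficit coefficient k so that a
# cube-law model best matches historical downwind power observations taken
# with no wake steering. The data is synthetic; nothing here reproduces the
# model used by Dabiri's group.

rng = np.random.default_rng(0)
v_free = rng.uniform(6.0, 12.0, size=200)            # free-stream wind speed, m/s
k_true = 0.25                                         # "real" deficit for the fake site
p_obs = ((1.0 - k_true) * v_free) ** 3 * (1 + 0.05 * rng.standard_normal(200))

def model_power(v, k):
    return ((1.0 - k) * v) ** 3                       # cube law on the waked wind speed

# One-dimensional least squares by brute-force scan: trivially fast on a laptop.
ks = np.linspace(0.0, 0.5, 501)
errors = [np.mean((model_power(v_free, k) - p_obs) ** 2) for k in ks]
k_fit = ks[int(np.argmin(errors))]
print(f"fitted wake deficit: {k_fit:.3f}")            # close to 0.25
```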

Dabiri’s team, then at Stanford University, worked with wind farm operator TransAlta to test their optimization algorithm on a line of six turbines in Alberta, Canada. “That middle ground, between the two-turbine studies and a full wind farm, is important for us to investigate,” he said, to give operators confidence about real-world operation.

“Academic research has largely been focused on numerical simulations, some wind-tunnel studies, and then, even in the field, it’s typically maybe a pairwise study,” Dabiri said. “We’re finding there’s still a pretty big leap from standard methods of investigation and what happens in a real wind farm.” One concern is “secondary steering,” in which a deflected wake is further modified by interactions with the downwind turbine, which is not important for just one pair of turbines.

As the researchers hoped, their algorithm increased electric output by almost 50% for slow winds directed along the line. Wake steering also significantly reduced fluctuations in power generation due to turbulence, another important consideration. However, these wind conditions are rare at this test site, so the improvement is expected to be much smaller when averaged over a year.

In evaluating long-term adoption of wake steering, operators also will need to know how it affects reliability. “Over a 10-year period of operating the turbines in this mode, what could the long-term impacts be on the blade health, et cetera?” he asked. “Those are important questions to consider.”

Design for Steering
Although the results from existing farms are promising, “the bigger impact is in how we design future wind farms,” Dabiri said. To date, “most wind farms are designed conservatively, such that the turbines are spaced far apart from one another,” which is one reason the increases are modest.

Fleming agreed that as operators become comfortable they can mitigate wake losses, it could open “opportunities for densification of wind farms,” perhaps significantly. More speculatively, there may even be ways to harness the wake interactions. “When we first modeled wake steering, it was more or less as a horizontal displacement of the wake,” and the goal was to “navigate these wakes into the gaps between other turbines,” Fleming said. “But when you look at the three-dimensional flow out of CFD (computational fluid dynamics), there’s an additive effect to wake steering because of the generation of counterrotating vortices that persist through the flow.” These vortices could suck down faster, higher-altitude winds, which he described as “different from just avoiding wake losses.”

Dabiri suspects these interactions could be even more important with vertical-axis turbines, although so far such designs are less mature and reliable. “Vertical-axis turbines individually tend to be less efficient,” Dabiri acknowledged, but “they perform better when they are in close proximity. We see possibilities of 10X improvement, as opposed to 10% improvement.”

Even without such dramatic enhancements, however, the combination of real-time yaw-control algorithms for wake steering and simulations to improve the collective output of entire farms looks set to help drive the continued growth of wind farms and their implementation at high densities in previously inhospitable terrain.

Further Reading

Howland, M.F., Lele, S.K., and Dabiri, J.O. Wind farm power optimization through wake steering, Proc. Natl. Acad. Sci. 116, 14495 (2019). http://bit.ly/36FvZx2

Fleming, P., et al. Initial results from a field campaign of wake steering applied at a commercial wind farm – Part 1, Wind Energ. Sci. 4, 273 (2019). http://bit.ly/32jZz7J

Renewable & Alternative Energy, U.S. Energy Information Administration. https://www.eia.gov/renewable/data.php#wind

Wind Energy Research, U.S. National Renewable Energy Laboratory. https://www.nrel.gov/wind/

Don Monroe is a science and technology writer based in Boston, MA, USA.

© 2020 ACM 0001-0782/20/3 $15.00






























Across the Language Barrier
Translation devices are getting better at making speech and text understandable in different languages.

Society | DOI:10.1145/3379495 Keith Kirkpatrick

“The greatest obstacle to international understanding is the barrier of language,” wrote British scholar and author Christopher Dawson in November 1957, believing that relying on live, human translators to accurately capture and reflect a speaker’s meaning, inflection, and emotion was too great a challenge to overcome. More than 60 years later, Dawson’s theory may finally be proven outdated, thanks to the development of powerful, portable real-time translation devices.

The convergence of natural language processing technology, machine learning algorithms, and powerful portable chipsets has led to the development of new devices and applications that allow real-time, two-way translation of speech and text. Language translation devices are capable of listening to an audio source in one language, translating what is being said into another language, and then translating a response back into the original language.

About the size of a small smartphone, most standalone translation devices are equipped with a microphone (or an array of microphones) to capture speakers’ voices, a speaker or set of speakers to allow the device to “speak” a translation, and a screen to display text translations. Typically, audio data is captured by the microphones, processed using a natural language processing engine mated to an online language database located either in the cloud or on the device itself, and then the translation is output to the speakers or the screen. Standalone devices, with their dedicated translation engines and small portable form factors, are generally viewed as being more powerful and convenient than accessing a smartphone translation application. Further, many of these devices offer the ability to access translation databases stored locally on the device or access them in the cloud, allowing their use in areas with limited wireless connectivity.

Instead of trying to translate speech using complex rules based on syntax, grammar, and semantics, these language processing algorithms employ machine learning and statistical modeling. These initial models are trained on huge databases of parallel texts, or documents that are translated into several different languages, such as speeches to the United Nations, famous works of literature, or even multinational marketing and sales materials. The algorithms identify matching phrases across sources and measure how often and where words occur in a given phrase in both languages, which allows translators to account for differences in syntax and structure across languages. This data is then used to construct statistical models that link phrases in one language to phrases in the second, which allows for accurate and fast translation.

In practice, this means devices can translate between languages more quickly than ever before by using such modeling. With high-powered processors, quality microphones, and speakers incorporated into the device, a person can carry on a real-time, two-way conversation with someone who speaks an entirely different language. These devices represent a significant increase in accuracy and functionality above manual, text-based translation applications such as Google Translate.

The advances in technology have not gone unnoticed, as the market for language translation devices is projected to reach $191 million annually by 2024, up from slightly more than $90 million annually in 2018, according to data from Research & Markets. Much of the activity is due to the growth in international travel and tourism, particularly from residents of countries where English language proficiency is relatively low.

For example, countries such as Japan, China, and Brazil feature a strong middle class with the means to travel internationally. Yet each of these countries is ranked “low” on the 2018 Education First English Proficiency Index (EPI), reflecting the challenges many travelers have when leaving their home country.

The ideal solution is for citizens to learn to speak multiple languages, according to Howie Berman, executive director of The American Council on the Teaching of Foreign Languages. “Our position has always been that technology is a complementary piece to the language learning process,” Berman says. “I think language really depends a lot, it’s not just on what you say, but how you say it. And, I think translation devices really do fail to pick up on a lot of the cultural cues.”

However, the casual traveler may not have the time or inclination to become proficient in a new language in preparation for a tourist trip or event, like the 2020 Olympic Games in Japan, or the 2022 FIFA World Cup scheduled to be held in Qatar. For these one-off trips, Berman says, “We certainly don’t expect someone going to the Olympics to enroll in multiple classes right before they go; we realize that’s not feasible for everyone.” Regarding modern translation devices, Berman says, “We think they’re valuable tools, but we see them for what they are, as complementary tools to the classroom experience.”

Still, the use of machine learning will help translators become better at understanding nuance, regional dialects, and tone. As algorithms are trained on voice data containing these characteristics of everyday speech, the accuracy and intelligence of the models will improve over time, particularly with translations between languages that do not feature similar structures or character sets.
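As a concrete, deliberately tiny illustration of the statistical modeling described earlier, the sketch below counts aligned phrase pairs in an invented parallel corpus and normalizes the counts into translation probabilities. Real systems learn the alignments and work at far larger scale; the corpus and phrases here are made up.

```python
from collections import Counter, defaultdict

# Toy phrase-table construction in the spirit of the statistical approach
# described above: count how often phrase pairs co-occur in aligned sentence
# pairs, then normalize into conditional translation probabilities.
# The "corpus" is invented and the alignment is assumed, not learned.

parallel_corpus = [
    (["good", "morning"], ["buenos", "días"]),
    (["good", "night"],   ["buenas", "noches"]),
    (["good", "morning"], ["buenos", "días"]),
]

pair_counts = Counter()
source_counts = Counter()
for src_tokens, tgt_tokens in parallel_corpus:
    src, tgt = " ".join(src_tokens), " ".join(tgt_tokens)
    pair_counts[(src, tgt)] += 1
    source_counts[src] += 1

phrase_table = defaultdict(dict)
for (src, tgt), n in pair_counts.items():
    phrase_table[src][tgt] = n / source_counts[src]   # P(target | source)

print(phrase_table["good morning"])   # {'buenos días': 1.0}
```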

One device that addresses these concerns is Pocketalk, a standalone translation device developed and marketed by Japanese software company Sourcenext Corp., which the company says can translate between 74 languages. Pocketalk has shipped globally more than 600,000 units of the $230 device since its debut in 2017, capturing nearly 96% of the global translation device market, according to April 2019 data from analyst firm BCN Retail.

“Pocketalk was created to connect cultures and create experiences for people that do not speak the same language, and can and should be used for both business and leisure,” says Joe Miller, general manager and product lead for Pocketalk. Miller says Pocketalk’s translation engines can recognize local dialects, dialect nuances, slang, and accents. “The voice translation will use an accent when speaking back the translation, not a robotic voice,” Miller says.

However, like other devices designed to support live, multiple-way conversations, Pocketalk relies on a connection to the Internet to access its online language database and translation engine. Devices that feature a limited number of languages often can store these databases on the device, but devices that support dozens of languages generally require a persistent connection to a cloud database. While Pocketalk works on 4G cellular connections, devices such as Birgus’ Two Way Language Translator or the ODDO AI pocket translator require the use of a Wi-Fi connection, and will not work using only a cellular connection.

Devices that require a Wi-Fi connection may not be suitable for travelers who spend a lot of time interacting with people outside of formal indoor settings, as they may not be able to access a reliable Wi-Fi signal. That drawback is less of an issue for devices designed for the international business community, whose members use them to conduct real-time business meetings and seminars that require two or more languages to be translated.

“Through our research we found that there was a need for a translator that is optimal for professional uses and can support multiple people easily conversing at the same time,” says Andrew Ochoa, founder and CEO of Waverly Labs, creator of the Ambassador, a small over-the-ear translation device that can support up to 20 languages and 42 dialects, but which requires a companion iOS or Android mobile application paired to a smartphone to function. “Whether someone is participating in one-on-one conversation, a multi-person meeting, or larger conference setting, Ambassador allows them to easily listen and communicate with their colleagues and teams.”

The Ambassador incorporates a series of microphones, and combines the input with speech recognition neural networks, in order to capture speech clearly. The system also utilizes cloud-based machine translation engines built on translation models that incorporate local accents and dialects, allowing Waverly Labs to use machine learning to tune the accuracy of its devices based on regional parameters.

When traveling, not all communication is verbal. Fujitsu also offers a portable standalone translation device similar to Pocketalk, called Arrows Hello, which also includes a camera that can capture images, such as signs and menus that include foreign characters, and then display the translations of those text-based materials on its screen. Similarly, optical character recognition (OCR) technology company ABBYY offers a consumer-focused mobile app called TextGrabber that can “read” text or QR codes in more than 60 languages, then translate the words or phrases to a different target language while retaining the appropriate syntax and meaning, according to Bruce Orcutt, the company’s vice president of product marketing.

“ABBYY’s an OCR company, so you can imagine our bias towards converting everything to text that’s possible,” Orcutt says. The TextGrabber app, he says, “uses multiple technologies that have evolved and developed to ultimately identify text, and then we use our OCR technology once we have identified the text.” TextGrabber employs machine learning algorithms to identify text within an image, applies OCR to capture that text, then applies a logic engine to clean up syntax and character misreads, such as being able to discern whether a character is a zero or the letter “O,” based on context.
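A minimal sketch of the kind of context rule such a logic engine might apply when deciding between the digit zero and the letter “O” (a hypothetical heuristic, not ABBYY’s implementation):

```python
# Hypothetical post-OCR cleanup rule of the kind described above: decide
# between "0" and "O" from the surrounding characters in a token. Real engines
# combine many such signals with language models; this is only a toy heuristic.

def fix_zero_vs_o(token: str) -> str:
    digits = sum(c.isdigit() for c in token)
    letters = sum(c.isalpha() and c not in "Oo" for c in token)
    if digits > letters:
        return token.replace("O", "0").replace("o", "0")   # digit-heavy: use zero
    if letters > digits:
        return token.replace("0", "O")                      # letter-heavy: use the letter O
    return token                                            # ambiguous: leave unchanged

print(fix_zero_vs_o("2O2O"))    # -> "2020"
print(fix_zero_vs_o("L0ND0N"))  # -> "LONDON"
```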

While TextGrabber currently does not include any functionality for capturing voice or video to aid in real-time translation, its OCR translation technology is incorporated into solutions from Microtek, Panasonic, Ricoh, Sharp, and others. Orcutt believes that in the future, devices that can handle any type of media, including audio, moving video, images, and text, will become commonplace.

“If you look at the younger generations, [those] digital-first generations, they have no problem navigating these tools, as they’re part of their ecosystem,” Orcutt says. “And I think with the 2020 Olympics coming up in Japan, there’ll be a tremendous amount of innovation in this area to help. I know the Japanese government is interested in making the Japanese market more easily navigated by tourists to make the Olympic experience better.”

Clearly, technology developments in machine learning have led to devices that can provide accurate, real-time translations for people attending large, multinational-focused events such as the Olympics. Berman, however, hopes these technical achievements may spur people to take the next step and actually try to learn another language to fully understand its nuances, via a combination of technology and traditional classroom instruction.

“I think it’s wonderful that these devices and these tools are elevating the status of language,” Berman says. “We think [translation devices] are valuable tools, but we see them as complementary tools to the classroom [learning] experience.”

Further Reading

Brown, Peter F., et al. A statistical approach to language translation. COLING (1988). https://www.semanticscholar.org/paper/A-statistical-approach-to-language-translation-Brown-Cocke/2166fa493a8c6e40f7f8562d15712dd3c75f03df

Wenniger, Gideon Maillette de Buy. Aligning the foundations of hierarchical statistical machine translation. (2016). https://www.semanticscholar.org/paper/Aligning-the-foundations-of-hierarchical-machine-Wenniger/de12e7ecf32523ac9b480d3dab052ec5b43ebef9

What buyers need to know about speech translation devices. https://www.youtube.com/watch?v=LUvNcp2xQqM

Keith Kirkpatrick is principal of 4K Research & Consulting, LLC, based in Lynbrook, NY, USA.

© 2020 ACM 0001-0782/20/3 $15.00


Amy Bruckman, a professor and senior associate chair in the School of Interactive Computing at the Georgia Institute of Technology, says her interest in computer science began when she took computer science classes in high school.

Bruckman went on to earn her undergraduate degree in physics from Harvard University, her master of science degree from the Massachusetts Institute of Technology (MIT) Media Lab Interactive Cinema Group, and her Ph.D. from the MIT Media Lab’s Epistemology and Learning Group.

After receiving her doctorate, Bruckman joined the faculty at the Georgia Institute of Technology, where she has remained ever since.

Her research interests include social computing, collaboration, social movements, content moderation, conspiratorial ideation, and Internet research ethics.

“I am also writing a book,” Bruckman continues. “It is called Should You Believe Wikipedia? Understanding Knowledge and Community on the Internet,” and is “based on a lot of the things I teach in my Design of Online Communities class, which I have been teaching since 1997. I am hoping to share some of what I have learned in teaching the class in the book.”

She anticipates the book will be published next year, by Cambridge University Press.

Bruckman says she has “one fun dream,” which is to facilitate communication between people who hold radically different political views, so they can come to understand one another better. “I don’t know how to do that, but maybe I will have invented some kind of Internet game or discussion forum where that can happen.”

—John Delaney





Education
Computing and Community in Formal Education
Culturally responsive computing repurposes computer science education by making it meaningful to not only students, but also to their families and communities.

DOI:10.1145/3379918 Michael Lachney and Aman Yadav
• Mark Guzdial, Column Editor

It was a cold morning in late February when Angela (pseudonym), an African American cosmetologist, arrived with a hair mannequin at a middle school in the city where her salon is located. Angela went to the main office and signed the guestbook before making her way to Brenda’s classroom. Brenda (pseudonym) is a White technology teacher who has been an educator in the city for more than a decade. She has a strong passion for exposing students to educational technologies, especially those that support engineering and computer science (CS) lessons. This particular morning, she was prepared to implement a two-day programming lesson she developed with Angela and two university researchers.

The lesson used a visual programming application called Cornrow Curves (see Figure 1) that had been created by the Culturally Situated Design Tools research team (see https://csdt.org). Cornrow Curves helps teach block-based programming and transformational geometry by having young people explore an original body of African mathematical knowledge through the history and design of cornrow braids.1 This grounds it in culturally responsive computing, an area of research and practice that, in part, is intended to confront racial and ethnic underrepresentation in CS. Culturally responsive computing challenges the idea that students’ families, interests, heritages, and community contexts are barriers to learning. Alternatively, students’ identities are foundational for a quality education.

For culturally responsive computing researchers and practitioners, programming for programming’s sake is part of the problem of underrepresentation, as it reinforces the idea of culture-free instruction. This assumption allows for the reproduction of the dominant culture in the classroom, which in the U.S. tends to reflect White middle-class values. To create education contexts that represent more than the White middle-class status quo, culturally responsive computing seeks to “translate” Indigenous knowledges, vernacular practices, civic engagement, hacking, and culturally situated forms of entrepreneurship into CS education.2

As a culturally responsive computing application, Cornrow Curves has the anti-racist benefit of highlighting non-European mathematics, which is important for young people of all racial and ethnic backgrounds in demographically homogeneous or heterogeneous classrooms. When it is implemented in a school that serves African American and Black communities (for example, at Brenda’s school over 50% of students identify as African American or Black) it has the added benefit of helping to broker school-community relationships. This gives local cultural experts opportunities to shape classroom curricula, which can be especially important for White teachers who are not from and do not live in the communities they serve.

Angela, Brenda, and the two university researchers delivered the Cornrow Curves lesson together, reinforcing the math content across virtual and physical braiding activities. Students moved back and forth between learning to physically braid with Angela (see Figure 2) and program braids on their computers. Each day the classroom was busy with children engaged in culturally and computationally rich activities. Reflecting on the lesson, Brenda explained the importance of collaborating with Angela: “I liked when she interacted with the kids, and she had some of my more difficult young ladies come over and actually be interested in doing Cornrow Curves, where on another occasion they might sit and not participate.” What can this vignette tell us about the role of local cultural experts like Angela in broadening the participation of underrepresented communities in computing?

Culturally Responsive Computing in Formal Computer Science Education
While culturally responsive computing in out-of-school or after-school settings provides important insight into the strengths of cultural content for supporting CS education, there has been less attention to its role in formal classrooms. One possible reason for this is the fact that some aspects of culturally responsive computing cannot be easily implemented as a set of predetermined steps. As the name suggests, it aims to be responsive to locally situated contexts. Of course, culturally responsive computing can and should include pre-made curricula and tools. For example, the Exploring Computer Science curriculum (see http://exploringcs.org) includes culturally situated design tools and has rich opportunities for context-specific problem solving. However, if pre-packaging equates to standardization then there is a risk of shallowly representing computing-culture connections.

One way to make deep connections is to foster teachers’ relationships with folks from outside the traditional school system who can provide insight into the larger context of students’ lives, histories, and knowledge, as seen in the collaboration between Angela and Brenda. In another instance, Sandoval4 described a CS teacher of European descent who began to develop culturally responsive competencies by working with self-identified Indigenous Xican@s, attending a food justice symposium, and helping students connect classroom content to real-world places, such as community gardens.

Therefore, focusing attention on culturally responsive computing in formal CS education provides opportunities to highlight the importance of out-of-school assets for broadening participation and, potentially, strengthening the relationship schools have with the communities they serve. Indeed, in the second author’s CT4EDU (see http://ct4edu.