Originally Posted On: singularityhub.com

You’re driving along the highway when, suddenly, a person darts out across the busy road. There’s speeding traffic all around you, and you have a split second to make the decision: do you swerve to avoid the person and risk causing an accident?

Do you carry on and hope to miss them? Do you brake? How does your calculus change if, for example, there’s a baby strapped in the back seat?

In many ways, this is the classic “moral dilemma,” often called the trolley problem. It has a million perplexing variants, designed to expose human bias, but they all share the same basic structure. You’re in a situation with life-or-death stakes and no easy options, where the decision you make effectively prioritizes who lives and who dies.

A new paper from MIT published last week in Nature attempts to come up with a working solution to the trolley problem, crowdsourcing it from millions of volunteers. The experiment, launched in 2014, defied all expectations, receiving over 40 million responses from 233 countries and territories, making it one of the largest moral surveys ever conducted.

A human might not consciously make these decisions. It’s hard to weigh up relevant ethical systems as your car veers off the road. But, in our world, decisions are increasingly made by algorithms, and computers just might be able to react faster than we can.

Hypothetical situations with self-driving cars are not the only moral decisions algorithms will have to make. Healthcare algorithms will choose who gets which treatment with limited resources. Automated drones will choose how much “collateral damage” to accept in military strikes.

Not All Morals Are Created Equal

Yet “solutions” to trolley problems are as varied as the problems themselves. How can machines make moral decisions when problems of morality are not universally agreed upon, and may have no solution? Who gets to choose right and wrong for the algorithm?

The crowdsourcing approach adopted by the Moral Machine researchers is a pragmatic one. After all, for the public to accept self-driving cars, they must accept the moral framework behind their decisions. It’s no good if ethicists or lawyers agree on a solution that’s unacceptable or inexplicable to ordinary drivers.

The results have the intriguing implication that moral priorities (and hence the types of algorithmic decisions that might be acceptable to people) vary depending on where you are in the world.

The researchers first acknowledge that it’s impossible to know the frequency or character of these situations in real life. Those involved in accidents often can’t tell us exactly what happened, and the range of possible situations defies easy classification. So, to make the problem tractable, they break it down into simplified scenarios, looking for universal moral rules.

As you take the survey, you’re presented with thirteen scenarios, each asking for a simple binary choice, designed to narrow responses down to nine factors.

Should the car swerve into the other lane, or should it keep going? Should it spare the young over the old? Women over men? Pets over humans? Should you try to spare the most lives possible, or is one baby “worth” two elderly people? Spare the passengers in the car or the pedestrians? Those who are crossing the road legally or illegally? Should you spare people who are more physically fit? What about those with higher social status, like doctors or businessmen?

In this harsh, hypothetical world, somebody’s got to die, and you’ll find yourself answering each of these questions—with varying degrees of enthusiasm. Yet making these decisions exposes deeply ingrained cultural norms and biases.

Crunching through the vast dataset the survey produced yields universal rules as well as fascinating exceptions. The three dominant factors, averaged across the entire population, were that respondents preferred to spare more lives rather than fewer, humans over pets, and the young over the elderly.
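To make that kind of aggregation concrete, here is a minimal sketch of how one might compute, from pairwise decisions like these, how often each attribute is spared. The column names and toy data are assumptions for illustration only, not the Moral Machine’s actual schema or method.

```python
import pandas as pd

# Each row records one decision: an attribute of the group that was spared
# and an attribute of the group that was sacrificed (toy data).
decisions = pd.DataFrame({
    "spared":     ["young", "human", "young", "elderly", "human"],
    "sacrificed": ["elderly", "pet", "elderly", "young", "pet"],
})

def spare_rate(df: pd.DataFrame, attribute: str) -> float:
    """Share of decisions involving `attribute` in which it was spared."""
    involved = df[(df["spared"] == attribute) | (df["sacrificed"] == attribute)]
    return float((involved["spared"] == attribute).mean())

for attr in ["young", "elderly", "human", "pet"]:
    print(attr, round(spare_rate(decisions, attr), 2))
```

Averaging this kind of rate over millions of decisions, and then slicing it by country or respondent attributes, is roughly the shape of the analysis described below.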

Regional Differences

You might agree with these broad strokes, but looking further yields some pretty disturbing moral conclusions. More respondents chose to save a criminal than a cat, but fractionally preferred to save a dog over a criminal. As a global average, being old is judged more harshly than being homeless—yet homeless people were spared less often than the obese.

These rules didn’t apply universally: respondents from France, the United Kingdom, and the US had the greatest preference for youth, while respondents from China and Taiwan were more willing to spare the elderly. Respondents from Japan displayed a strong preference for saving pedestrians over passengers in the car, while respondents from China tended to choose to save passengers over pedestrians.

The researchers found that they could cluster responses by country into three groups: “Western,” predominantly North America and Europe, where they argued morality was predominantly influenced by Christianity; “Eastern,” consisting of Japan, Taiwan, and Middle Eastern countries influenced by Confucianism and Islam, respectively; and “Southern” countries including Central and South America, alongside those with a strong French cultural influence. In the Southern cluster there were stronger preferences for sparing women and the fit than anywhere else. In the Eastern cluster, the bias towards saving young people was least powerful.

Filtering by the various attributes of the respondent yields endless interesting tidbits. “Very religious” respondents are fractionally more likely to save humans over animals, but both religious and irreligious respondents display roughly equal preference for saving those of high social status vs. those of low social status, even though (one might argue) it contradicts some religious doctrines. Both men and women prefer to save women, on average—but men are ever-so-slightly less inclined to do so.

Questions With No Answer

No one is arguing that this study somehow “resolves” these weighty moral questions. The authors of the study note that crowdsourcing the data online introduces a sample bias. The respondents skewed young, skewed male, and skewed well-educated; in other words, they looked like the kind of people who might spend 20 minutes online filling out a survey about morality for self-driving cars from MIT.

Even with a vast sample size, the number of questions the researchers could pose was limited. Getting nine different variables into the mix was hard enough—it required making the decisions simple and clear-cut. What happens if, as you might expect in reality, the risks differ depending on the decision you take? What if the algorithm were able to calculate, for example, that you had only a 50 percent chance of killing pedestrians given the speed you’re going?

Edmond Awad, one of the authors of the study, expressed caution about over-interpreting the results. “It seems concerning that people found it okay to a significant degree to spare higher status over lower status,” he told MIT Technology Review. “It’s important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’ The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who’s going to die or not, and also about how bias is happening.”
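As a toy illustration of that risk-analysis framing, the sketch below compares options by expected harm rather than by a certain outcome. The probabilities and counts are invented purely for illustration; nothing like this appears in the study.

```python
# Toy comparison of driving options by expected harm rather than certain
# outcomes. All probabilities and counts below are invented for illustration.

def expected_fatalities(p_harm: float, people_at_risk: int) -> float:
    """Expected number of deaths if this option is taken."""
    return p_harm * people_at_risk

options = {
    "stay in lane": expected_fatalities(0.5, 2),  # 50% chance of hitting 2 pedestrians
    "swerve":       expected_fatalities(0.2, 1),  # 20% chance of harming 1 passenger
}

# Pick the option with the lowest expected harm. Note that this criterion
# itself embeds a moral choice about what counts and how it is weighted.
best = min(options, key=options.get)
print(best, options)
```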

Perhaps the most important result of the study is the discussion it has generated. As algorithms start to make more and more important decisions, affecting people’s lives, it’s crucial that we have a robust discussion of AI ethics. Designing an “artificial conscience” should be a process with input from everybody. While there may not always be easy answers, it’s surely better to understand, discuss, and attempt to agree on the moral framework for these algorithms, rather than allowing the algorithms to shape the world with no human oversight.

Image Credit: Scharfsinn / Shutterstock.com

About Author: Thomas Hornigold is a physics student at the University of Oxford. When he’s not geeking out about the Universe, he hosts a podcast, Physical Attraction, which explains physics – one chat-up line at a time.

Originally Posted On: techrepublic.com

The explosion of data in consumer and business spaces can place our productivity at risk. There are ways you can resist drowning in data.

The pace of data creation steadily increases as technology becomes more and more ingrained in people’s lives and continues to evolve.

According to Forbes.com last May, “there are 2.5 quintillion bytes of data created each day at our current pace, but that pace is only accelerating with the growth of the Internet of Things (IoT). Over the last two years alone 90 percent of the data in the world was generated.”

While technology should make our lives easier, the information it provides can negatively impact our mental function by overwhelming us with too much input.

However, don’t confuse cognitive overload with work overload. Whereas work overload is simply having too much to do and not enough time to complete it, cognitive overload refers to having too much information to process at once.

SEE: Leadership spotlight: How to make meetings worthwhile (Tech Pro Research)

Fouad ElNaggar, co-founder and CEO of Sapho, an employee experience software provider based in San Bruno, Calif., is passionate about cognitive overload. Together we developed some tips for workers on how to fix the problem.

1. Close/shut off distracting applications

The irony of productivity applications is that they can actually make you less productive. Microsoft Office includes Outlook, an email application, which can “helpfully” notify you when new email arrives.

Sadly, this can also contribute to your information overload: if you’re in the middle of a task and you switch to Outlook to read an email, you might even forget about the task you were working on. Instant messaging apps, or frankly anything that dings or pops up an alert, are just as distracting. When trying to stay focused on a task, close or shut off any applications that could serve as potential distractions. Oh, and silence your phone, too.

2. Switch off push notifications

If you can’t close a potentially distracting application because you need it available, you can still quiet it down. Between Slack, Gchat, calendar, email and text messages, it probably seems like those tiny dialog boxes pop up on your screen all day long. Take a few minutes to evaluate which push notifications actually help you get work done, and turn off the rest.

SEE: Project prioritization tool: An automated workbook (Tech Pro Research)

3. Bucket your email correspondence

Constantly checking and responding to email is a major time drain. Set aside two times a day to answer emails, and don’t check your inbox at any other time. Put your phone on “Do Not Disturb,” and make it a point not to let notifications interrupt you during that time.

4. Stay off personal social media/news sites/other temptations

It’s easy and tempting to check social media or your favorite news outlet while working, especially if you’re waiting for a task to finish before you proceed (such as rebooting a server or uploading a file). However, this just puts more data into your current memory banks, so to speak, so that instead of thinking about that server patching project, you’re also thinking about the NFL draft or how many people “like” your funny Facebook meme. Save social media for lunchtime or after work. It’ll be more meaningful, and you can keep your work and recreation separate, as they should be.

5. Utilize minimalism

I keep a very minimalistic workspace: a family picture, a Rick Grimes (from “The Walking Dead,” which contains many parallels to IT life) keychain figure, and a calendar. No fancy furniture, no posters, no inspiring slogans, and no clutter. This helps me stay oriented to what I need to do without the sensory overload.

I also apply the same principles to my computer: I only keep running the programs I need, and I close unnecessary browser tabs, SSH sessions, and Windows Explorer windows so that I’m concentrating only on the task at hand.

SEE: IT jobs 2018: Hiring priorities, growth areas, and strategies to fill open roles (Tech Pro Research)

6. Avoid multitasking

You may not always have a choice, but avoiding multitasking is one of the best things you can do to keep your brain from being overwhelmed. Dividing your attention among four or five parallel tasks is a sure-fire way to ensure that those tasks take longer or end up being completed less efficiently than if you tackled them one at a time. Worse, it’s all too easy to drop tasks entirely as your attention shifts, resulting in uncompleted work.

7. Utilize documentation

Document your to-do lists, operational processes, and the daily procedures you need to follow (building a new server, for instance) so that you don’t rely on memory and can quickly handle tasks—or better yet—refer them to someone else. Anytime I discover how something works or what I can improve upon, I update the related electronic documentation so I don’t have to comb through old emails, leaf through handwritten notes, or worse, ask coworkers to fill in missing details that I should have recorded.

8. Take notes as you go

In addition to relying upon established documentation to make your efforts more productive, take notes during difficult operations such as a server recovery effort or a network troubleshooting endeavor. These notes serve as a “brain dump” of your activities, so you can purge them from memory and refer back to the information later if needed.

Believe me, there’s nothing more challenging than sorting through a complex series of tasks during an outage post-mortem to recall what you did to fix the problem. A written record can save your brain.

SEE: Comparison chart: Enterprise collaboration tools (Tech Pro Research)

9. Take routine breaks

This should be a no-brainer, yet too many people consider themselves too busy to take a break, even though doing so lets you step away from work and hit the “pause” button. It’s not just about relaxing your brain so that you return to work with a more productive mindset; a quick walk around the building can also give you room to think and come up with new ideas or solutions to the problems you’re facing, eliminating one more source of information overload.

10. Avoid open space seating areas

I’ve written about some of the problems of the infamous (and unfortunately common) open-seating plan in companies. In a nutshell, having no privacy and sitting in close physical and auditory proximity even to individuals you consider close friends strains working relationships and breeds frustration.

Avoiding cognitive overload isn’t just about not taking on too much at once; it’s also about not letting other people’s activities intrude upon your own productivity. Whether it’s an annoying personal phone call, music playing, or even just loud chewing, other people’s nearby activity can be a source of unwanted input, which reduces your capacity to do your job. You may not have a choice about sitting in an assigned open-space seat, but take advantage of opportunities such as working from home, using an available conference room, or moving to an empty part of the office when you really need to focus.

11. Break projects down into chunks

Facing the entirety of a complex project is a daunting mission. It’s better and more effective to break a project down into subcomponents, and then focus on these separately, one at a time.

For instance, say you want to migrate users, computers, and services from one Active Directory domain to another. This would be overwhelming to focus on at once, so the best way to proceed is to divide the project into tasks. One task could be migrating user accounts and permissions. The next task could be migrating computer accounts, and the task after that could be addressing DNS changes, and so on. Plan it out in advance, and then tackle it piece-by-piece.

12. Control your calendar

Don’t let colleagues fill your day with meaningless meetings. Have a conversation with your coworkers about which meetings are absolutely necessary for you to participate in, and skip the rest. If you are a manager or leader, encourage your employees to schedule in-person meetings only when they are absolutely necessary.

13. Don’t take your phone into your bedroom

You spend enough time on screens during the day. The simple act of charging your phone in another room gives you time to really disconnect. It also gives you a chance to wake up refreshed, and think about the day ahead before reactively reaching for your device and checking social media or email.

SEE: Research: The evolution of enterprise software UX (Tech Pro Research)

Reducing team cognitive overload

ElNaggar and I also thought of a couple of tips for business leaders on ways to reduce cognitive overload for their team. These tips include:

14. Invest in the right technology

Take the time to learn what processes or tools are pain points for your employees’ productivity. Research which solutions can automate certain tasks or limit daily distractions and implement them across your workforce.

15. Embrace employee-centric workflows

ElNaggar says that leaders should “embrace the idea that employee experience matters, which will have a ripple effect in their organization.” He recommends that leaders start to develop more employee-centric workflows that reduce interruptions for their employees, helping them focus on priorities and accomplish more work.

An example of an employee-centric workflow would be a business application or web portal, which gives employees a single, actionable view into all of their systems and breaks down complex processes into single-purpose, streamlined workflows, allowing employees to be more productive.

“Without leadership teams championing an employee-centric mindset, nothing will really change in the mid and lower levels of a company. Business leaders must start thinking about the impact their employees’ digital experience has on their work performance and overall satisfaction, and support the idea that investing in employee experience will drive employee engagement and productivity,” ElNaggar concluded.

Originally Posted On: informationweek.com

There might be a better, knowledge management-based way to conduct the US Census, according to a group of university researchers.

Consider for a minute whether the best way to collect important data is to mail 125 million (or so) paper forms, often to “Current Occupant,” and to then follow up with humans carrying clipboards and ringing doorbells. You probably would conclude that it’s a lot of work and a process likely to result in the collection of incomplete or inaccurate data.

Then, you’ll update that data only every 10 years: Lots can change in 10 years. Yet, you will use the collected data to determine things like how your congressional representatives will be elected, how federal funds are allocated to local schools, even where new roads will be built and public transportation offered.

Is there a better way to do the US Census than how it has been done for 228 years?

A group of university researchers believes that the data gathered and analyzed by the US Census Bureau can be found in existing sources without sending any forms or people out into the field. Actually, the researchers argue that the government can collect much more data and more timely data using sources like tax returns, state websites, even Google search data.

“The costs of a census are pretty large, $17.5 billion. That’s based on these paper forms. That’s really the driver behind our research,” says Murray Jennex, a professor focused on knowledge management at San Diego State University. “The Census Bureau has spent a lot of money for technology to analyze data, but very little on collecting data,” he added during a recent interview.

Jennex was part of the team that included San Diego State professors James Kelly (lead author), Kaveh Abhari and Eric Frost, along with Alexandra Durcikova of the University of Oklahoma. Together, they authored a research paper titled, “Data in the Wild: A KM Approach to doing a Census Without Asking Anyone and the Issue of Privacy.” That paper will be presented in January at the Hawaii International Conference on System Sciences.

While the cost of paper census surveys — including the one scheduled for 2020 — is a key consideration in the team’s research, there are several other major factors.

One such consideration is the growing abundance of data in the public sphere, such as that collected by many federal agencies (the Internal Revenue Service, the Department of Education, and the Department of Labor, for example), state and municipal agencies, and academic research organizations. Add in the trend data that can be gleaned from search engines such as Google, public utility records, and commercial data services such as the major consumer credit bureaus. Together they represent a wealth of data, highlighting how many people live where, the areas where poverty is most challenging, ethnic trends, and the need for elderly, healthcare, and educational support.

In addition, that data can be updated and analyzed in what Jennex calls “not quite real time.” “The data we would be using could be refreshed every year, and could be used to guide public policies,” he said.

The limiting factor, however, is privacy: how could the Census Bureau protect personally identifiable information (PII)? Jennex notes that data can be anonymized by stripping off PII, which would be effective protection when the data analysis covers large areas, even five-digit ZIP codes. But it might not take much work for someone to identify unique individuals or families at a neighborhood level, particularly those who stand out by income, household size, or ethnic background.

So, protections would have to be put in place.
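As a rough illustration of the anonymize-and-aggregate idea Jennex describes, here is a minimal sketch that drops direct identifiers and summarizes at the ZIP-code level. The field names, the toy records, and the ZIP-level grouping are assumptions for illustration, not the researchers’ actual approach.

```python
import pandas as pd

# Toy records with direct identifiers alongside the attributes of interest.
records = pd.DataFrame({
    "name":      ["A. Smith", "B. Jones", "C. Lee"],
    "ssn":       ["...", "...", "..."],
    "zip":       ["92101", "92101", "92102"],
    "household": [3, 5, 2],
    "income":    [48000, 72000, 39000],
})

# Drop personally identifiable columns entirely.
anonymized = records.drop(columns=["name", "ssn"])

# Aggregate to the ZIP level; small-area cells could still re-identify
# unusual households, so a real system would also suppress small counts.
summary = anonymized.groupby("zip").agg(
    households=("household", "count"),
    median_income=("income", "median"),
)
print(summary)
```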

Another hurdle that the researchers acknowledge is that “government is actually very bad at sharing data.” For decades, government agencies have tended to keep their data siloed, despite attempts by some government leaders to move to an open data approach. Jennex cited the IRS as a particularly rich data source, not only for basic financial data but also for insight into household size, health issues, employment trends, and even transportation planning as more Americans work out of home offices.

Existing data, such as that from the IRS, can actually be more accurate than what is currently collected through census forms such as the American Community Survey. In their paper the researchers cited how “household income” can be misleading, depending on whether household members are married or unrelated. The income questions also focus on what someone made in a single year, without factoring in whether that year’s earnings were significantly higher or lower than in a more typical year.

However, don’t expect the paper questionnaire to go away in the year and a half before you expect to find one in your mail. The changes that the researchers suggest are much further down the road.

Jim Connolly is a versatile and experienced technology journalist who has reported on IT trends for more than two decades. As Executive Managing Editor of InformationWeek, he oversees the day-to-day planning and editing on the site.

Originally Posted On: informationweek.com

When hiring gets tough, IT leaders get strategic. Here’s how successful organizations seize the experts their competitors only wish they could land.

The technology industry’s unemployment rate is well below the national average, forcing companies to compete aggressively for top talent. When presented with a range of recruitment strategies in a recent Robert Half Technology survey — including using recruiters, providing job flexibility and offering more pay — most IT decision makers said they are likely to try all approaches in order to land the best job candidates for their teams.

“We’re currently in a very competitive hiring market,” noted Ryan Sutton, district president for Robert Half Technology. “Employers want to hire the best talent to help keep their organization’s information safe, but so do a lot of other companies.”

Robert Half’s research finds that software development and data analytics experts are the most challenging to hire. Many other talents are scarce, too. “Some of the most in-demand skills right now include cloud security, security engineering, software engineering, DevOps, business intelligence and big data, as well as expertise in Java full-stack, ReactJS and AngularJS,” Sutton said.

What works

Finding qualified job candidates typically requires using a combination of strategies. But it’s also important to be able to move quickly. “At the core of the labor market now is a demand for speed and efficiency in the hiring process, but don’t confuse an expeditious process with a hastily made decision,” Sutton warned. “Some smart options would be to work with a specialized recruiter who knows your local market well; increasing the pay and benefits package to better attract a top candidate; and losing some of the skills requirements on your job description that aren’t must-haves to widen your talent pool.” He also reminded hiring managers to not underestimate the power of networking. “Let your contacts know you’re looking to hire for a certain position.”

Look beyond the typical sources, suggested Art Langer, a professor and director of the Center for Technology Management at Columbia University and founder and chairman of Workforce Opportunity Services (WOS), a nonprofit organization that connects underserved and veteran populations with IT jobs. “There is a large pool of untapped talent from underserved communities that companies overlook,” he explained. Businesses are now competing in a global market. “New technology allows us to connect with colleagues and potential partners around the world as easily as with our neighbors,” Langer said. “Companies hoping to expand overseas can benefit from employees who speak multiple languages.”

Companies need to explore different models of employment if they want access to the best and the brightest job candidates, observed Nick Hamm, CEO of 10K Advisors, a Salesforce consulting firm. “Some of the most talented professionals are choosing to leave full-time employment to pursue freelancing careers or start their own small consulting companies as a way to gain more balance or reduce commute times,” he advised. “If companies want access to these individuals, they’ll need the right processes and mindset in place to incorporate contract employees into core teams.” Using a talent broker to find the right experts, vet them and apply them inside an organization to solve business problems can alleviate many of the challenges people may now have tapping into the gig economy, Hamm added.

John Samuel, CIO of Computer Generated Solutions, a business applications, enterprise learning and outsourcing services company, advised building some flexibility into job descriptions and requirements. “In this tight job market, a good way is to find candidates with the right attitude and a solid foundation and then train them in areas where they lack experience,” he said. Like Sutton, Samuel believes that many job descriptions are unrealistic, listing many requirements that aren’t core to the job’s role. “Rather than limiting your potential pool of candidates, simplify the job description to include your core requirements to entice applicants to fill open roles,” Samuel recommended.

Mike Weast, regional IT vice president at staffing firm Addison Group, urged hiring managers not to rely on software searches, no matter how intuitive they may claim to be, to uncover qualified job candidates. “There’s a lot of talk about using AI to find qualified candidates, but recruiters are needed to bridge the AI gap,” he claimed. “AI doesn’t qualify a candidate for showing up on time, having a strong handshake or making eye contact when communicating.”

Training current employees to meet the requirements of a vacant position is an often-overlooked method of acquiring experts. “It always makes sense to give existing employees the opportunity to expand their knowledge base and transition into vacant positions,” explained Lori Brock, head of innovation, Americas, for OSRAM, a multinational lighting manufacturer headquartered in Munich. “The roles within IT are merging with the traditional R&D functions as well as with roles in manufacturing, procurement, sales, marketing and more,” she added. “We can no longer consider jobs in IT fields as belonging to an IT silo within any organization.”

Last thought

It’s important to move quickly when you find a skilled, qualified job candidate. “Now is certainly not the time to be slow to hire,” Sutton said. “It’s a candidate’s market and they are well aware of the opportunities available to them.”

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, and RFID Journal.

Originally Posted On: techrepublic.com

Three jobs completely new to the IT industry will be data trash engineer, virtual identity defender, and voice UX designer, according to Cognizant.

With technology flooding the enterprise, many people fear that this technology will take over their jobs. However, tech like artificial intelligence (AI) and machine learning will actually create more jobs for humans, according to a recent Cognizant report. The report outlines 21 “plausible and futuristic” jobs that will surface in the next decade.

The 21 jobs follow three major underlying themes: Ethical behaviors, security and safety, and dreams, said the report. These themes come from humans’ deeper aspirations for the future of the enterprise and daily life. Humans want machines to be ethical; humans want to feel safe in a technologically fueled future; and humans have always dreamt of a futuristic world, which is now coming to fruition, according to the report.

SEE: Artificial intelligence: Trends, obstacles, and potential wins (Tech Pro Research)

Some of the jobs on Cognizant’s list could spark life-long careers, and some positions might be more fleeting, said the report. Here are the 21 jobs of the future:

  1. Cyber attack agent
  2. Voice UX designer
  3. Smart home design manager
  4. Algorithm bias auditor
  5. Virtual identity defender
  6. Cyber calamity forecaster
  7. Head of machine personality design
  8. Data trash engineer
  9. Uni4Life coordinator
  10. Head of business behavior
  11. Joy adjutant
  12. Juvenile cybercrime rehabilitation counselor
  13. Tidewater architect
  14. Esports arena builder
  15. VR arcade manager
  16. Vertical farm consultant
  17. Machine risk officer
  18. Flying car developer
  19. Haptic interface programmer
  20. Subscription management specialist
  21. Chief purpose planner

Three of the positions would be completely new in the IT world: Data trash engineer, virtual identity defender, and voice UX designer. A data trash engineer would be responsible for using unused data in an organization to find hidden insights, said the report; a virtual identity defender would lead a team to make a company’s business goal a reality; and a voice UX designer will use diagnostic tools, algorithms, and more to create the perfect voice assistant, said the report.

The big takeaways for tech leaders:

  • Emerging tech will actually create a whole new set of jobs for humans in the next 10 years, with some having more staying power than others. — Cognizant, 2018
  • The tech jobs of the future all follow three underlying themes that humans share: Ethical behaviors, security and safety, and dreams. — Cognizant, 2018

Originally Posted On: news.fiu.edu

Some skills are considered too “small” or specific to become a degree program and aren’t often listed on a student’s academic transcript. Yet, it’s a collection of these very skills that employers know are a big deal in the rapidly-changing 21st century workforce.

This is where badges come in. These digital icons represent achievements or skills in a certain area or subject matter. A form of ‘micro-credentialing,’ badges allow students to break down their educational experience – competency by competency – and tell the complete story of their educational journey to potential employers.

Today, badges are a rising trend in the rapidly changing world of higher education. In fact, according to a 2016 survey by the University Professional and Continuing Education Association, one in five colleges has issued a digital badge.

Randy Pestana and Brian Fonseca – from the Jack D. Gordon Institute for Public Policy in the Steven J. Green School of International and Public Affairs – understand the urgency behind bringing this new form of credentialing to FIU. The skills gap – the mismatch between what employers are looking for and what job candidates have to offer – dominates their conversations with industry partners.

“They continue to tell us that job candidates don’t have the skills they need,” Pestana said. “Employers are looking for people who not only have a deep knowledge of a specific subject matter, but also a wide array of other skills that allow them to work across a variety of other subject areas.”

In an attempt to begin to close this gap and give students from all majors and disciplines the opportunity to build the skills that matter most in the 21st century – and still graduate in four years – Pestana and Fonseca began working on building a badge program at FIU.

They started with a subject area that has major implications for all industries and sectors: cybersecurity.

“Hospitality, healthcare, government, law, business – there isn’t an industry that isn’t susceptible to cyberattacks,” Pestana said. “These badges give the basic knowledge everyone needs to know, because anyone can be targeted by a cyberattack and have their personal information compromised.”

Collaborating across the university, Pestana and Fonseca brought in expertise from FIU’s Division of Information Technology, College of Business, College of Engineering & Computing, College of Law and StartUp FIU to create six badges. They are focused on different areas related to cybersecurity, including the Internet of Things, blockchain, cryptocurrencies and cybersecurity policy and law.

To earn a badge, students attend a Saturday workshop, which includes a lecture and active learning exercise. If students earn all six badges, they will also earn a certificate in cybersecurity fundamentals.

Cybersecurity was a natural place to begin offering badges.

FIU is a nationally recognized hub for interdisciplinary cybersecurity study and research and is focused on helping grow a future pipeline of cybersecurity professionals. In fact, earlier this year, FIU was selected to be the educational partner and host of the 2018 National Initiative for Cybersecurity Education (NICE) Conference and Expo, which aims to bring together higher education and industry to address growing cybersecurity workforce shortages.

The cybersecurity badges are just the beginning of a broader initiative to bring more 21st century workforce competencies to FIU.

A special interdisciplinary committee led by Senior Vice President for Academic and Student Affairs Elizabeth Bejar – and which includes members from academic and student-services units across the institution – will be working closely with local industry partners to explore bringing new badge programs to the university.

“FIU is always looking toward the future – that’s who we are,” Bejar said. “We’re here to educate lifelong learners and ensure they have the relevant, just-in-time skills that put them at a competitive advantage in our 21st century workforce.”

Originally Posted On: edsource.org

No degree means diminished opportunities, study finds

Millions of Californians who began their college education but never finished deserve special support and policy changes to help get them across the finish line later in life, a new report urges.

The study from the non-partisan California Competes organization estimates that 4 million Californians, ages 25 to 64, earned some college credits at various times but no associate or bachelor’s degrees and are not in school now. As a result, their employment and financial prospects have suffered and they face “diminishing opportunities in labor markets that increasingly rely on workers with degrees,” said the report entitled “Back to College: California’s Imperative to Re-Engage Adults.”

The report found that those adults with some college but no degree are significantly less likely to earn more than $75,000 a year compared to those who have at least an associate degree from a community college. Only 14 percent of those who didn’t finish their degrees earn in that upper income bracket, compared to 36 percent of those who have degrees (and 5 percent of those with just high school or less).

Not surprisingly, fewer of those adults who have some college credits but no diploma own homes and have full health insurance compared to graduates. And other research shows that non-completers have higher default rates on college loans, with unhappy consequences.

“We’ve already invested in folks who haven’t crossed the finish line. So our argument is that it makes sense to help them get across the finish line to benefit the broader California economy and to boost their individual prosperity,” Lande Ajose, executive director of California Competes, said in an interview. That Oakland-based organization analyzes ways to improve higher education in the state and how such reforms can aid the economy. Ajose is also chairwoman of the California Student Aid Commission, which administers Cal Grants.

The study showed ethnic disparities in college completion among California adults between ages 25 and 64, with higher rates for whites and Asians than for Latinos and blacks. Sixty-two percent of Asians in that age range had earned a degree, compared to 53 percent of whites, 34 percent of blacks and 18 percent of Latinos.

However, black adults showed the highest rate (28 percent) of their ethnic group who started but did not finish a degree, followed by whites (23 percent), Latinos (17 percent) and Asians (13 percent).

Among the roadblocks facing adults who want to return to college are limitations on financial aid that don’t affect most traditional-age students, the report noted.

For example, federal Pell Grants are available for only 12 semesters over a person’s life and many of these adults are likely to have already used that allotment up years ago. Because of qualification rules and limits on expenditures, state-funded Cal Grants are very difficult to obtain for people who are older than 28 and several years out of high school. State officials are looking at ways to improve Cal Grants, including making them more available to people who attend community college years after high school.

“The inadequate financial aid options available to returning adults exacerbate the economic trends” that hurt the earning potential of people without degrees, the report said. In addition those people face personal and scheduling problems juggling work and family issues with their studies if they want to complete their degrees.

In addition, the report described poor coordination among California’s higher education systems and resulting “structural barriers that impede adults’ abilities to return to school.” Those include difficult access to academic transcripts and older data among different colleges and universities if an adult started at one or two campuses and seeks to finish at another, it said.

While describing problems, the report does not offer specific suggestions for improvements. California Competes officials said they expect a second report to do so by year’s end.

Adults without college degrees or certificates are at the center of a much-discussed effort in California. State leaders hope that the opening of a new online community college late next year will offer training and extra education for skilled jobs in fast-growing industries. Those credentials are intended mainly to be completed in a year or less.

However, most adults who want to finish the more traditional associate or bachelor’s degrees still must attend the state’s other 114 community colleges or a four-year university. Adult students currently can take some online courses offered at those schools.

However, Ajose said college campuses should make their class schedules and other services more flexible to serve older students.

Meanwhile, a separate new report shows that students who took out federal student loans for college but never finished degrees default at high rates and face many problems as a result. Twenty-three percent of borrowers who started college in 2003-04 defaulted within 12 years, compared to 11 percent of those who completed, according to a policy brief by The Institute for College Access and Success (TICAS).

Defaulters face “stark and immediate consequences” that could include fines, wage garnishment, lost job opportunities and suspended driver’s and professional licenses, said the report entitled “The Self-Defeating Consequences of Student Loan Default.” TICAS, a non-partisan research and policy group with offices in Oakland and Washington, D.C., called for reforms that would lift some of the most burdensome penalties and make it easier to enroll in income-based repayment plans.

Originally Posted On: technologyreview.com

Online versions of college courses are attracting hundreds of thousands of students, millions of dollars in funding, and accolades from university administrators. Is this a fad, or is higher education about to get the overhaul it needs?

A hundred years ago, higher education seemed on the verge of a technological revolution. The spread of a powerful new communication network—the modern postal system—had made it possible for universities to distribute their lessons beyond the bounds of their campuses. Anyone with a mailbox could enroll in a class. Frederick Jackson Turner, the famed University of Wisconsin historian, wrote that the “machinery” of distance learning would carry “irrigating streams of education into the arid regions” of the country. Sensing a historic opportunity to reach new students and garner new revenues, schools rushed to set up correspondence divisions. By the 1920s, postal courses had become a full-blown mania. Four times as many people were taking them as were enrolled in all the nation’s colleges and universities combined.

The hopes for this early form of distance learning went well beyond broader access. Many educators believed that correspondence courses would be better than traditional on-campus instruction because assignments and assessments could be tailored specifically to each student. The University of Chicago’s Home-Study Department, one of the nation’s largest, told prospective enrollees that they would “receive individual personal attention,” delivered “according to any personal schedule and in any place where postal service is available.” The department’s director claimed that correspondence study offered students an intimate “tutorial relationship” that “takes into account individual differences in learning.” The education, he said, would prove superior to that delivered in “the crowded classroom of the ordinary American University.”

We’ve been hearing strikingly similar claims today. Another powerful communication network—the Internet—is again raising hopes of a revolution in higher education. This fall, many of the country’s leading universities, including MIT, Harvard, Stanford, and Princeton, are offering free classes over the Net, and more than a million people around the world have signed up to take them. These “massive open online courses,” or MOOCs, are earning praise for bringing outstanding college teaching to multitudes of students who otherwise wouldn’t have access to it, including those in remote places and those in the middle of their careers. The online classes are also being promoted as a way to bolster the quality and productivity of teaching in general—for students on campus as well as off. Former U.S. secretary of education William Bennett has written that he senses “an Athens-like renaissance” in the making. Stanford president John Hennessy told the New Yorker he sees “a tsunami coming.”

The excitement over MOOCs comes at a time of growing dissatisfaction with the state of college education. The average price tag for a bachelor’s degree has shot up to more than $100,000. Spending four years on campus often leaves young people or their parents weighed down with big debts, a burden not only on their personal finances but on the overall economy. And many people worry that even as the cost of higher education has risen, its quality has fallen. Dropout rates are often high, particularly at public colleges, and many graduates display little evidence that college improved their critical-thinking skills. Close to 60 percent of Americans believe that the country’s colleges and universities are failing to provide students with “good value for the money they and their families spend,” according to a 2011 survey by the Pew Research Center. Proponents of MOOCs say the efficiency and flexibility of online instruction will offer a timely remedy.

But not everyone is enthusiastic. The online classes, some educators fear, will at best prove a distraction to college administrators; at worst, they will end up diminishing the quality of on-campus education. Critics point to the earlier correspondence-course mania as a cautionary tale. Even as universities rushed to expand their home-study programs in the 1920s, investigations revealed that the quality of the instruction fell short of the levels promised and that only a tiny fraction of enrollees actually completed the courses. In a lecture at Oxford in 1928, the eminent American educator Abraham Flexner delivered a withering indictment of correspondence study, claiming that it promoted “participation” at the expense of educational rigor. By the 1930s, once-eager faculty and administrators had lost interest in teaching by mail. The craze fizzled.

Is it different this time? Has technology at last advanced to the point where the revolutionary promise of distance learning can be fulfilled? We don’t yet know; the fervor surrounding MOOCs makes it easy to forget that they’re still in their infancy. But even at this early juncture, the strengths and weaknesses of this radically new form of education are coming into focus.

Rise of the MOOCs

“I had no clue what I was doing,” Sebastian Thrun says with a chuckle, as he recalls his decision last year to offer Stanford’s Introduction to Artificial Intelligence course free online. The 45-year-old robotics expert had a hunch that the class, which typically enrolls a couple of hundred undergraduates, would prove a draw on the Net. After all, he and his co-professor, Peter Norvig, were both Silicon Valley stars, holding top research posts at Google in addition to teaching at Stanford. But while Thrun imagined that enrollment might reach 10,000 students, the actual number turned out to be more than an order of magnitude higher. When the class began, in October 2011, some 160,000 people had signed up.

The experience changed Thrun’s life. Declaring “I can’t teach at Stanford again,” he announced in January that he was joining two other roboticists to launch an ambitious educational startup called Udacity. The venture, which bills itself as a “21st-century university,” is paying professors from such schools as Rutgers and the University of Virginia to give open courses on the Net, using the technology originally developed for the AI class. Most of the 14 classes Udacity offers fall into the domains of computer science and mathematics, and Thrun says it will concentrate on such fields for now. But his ambitions are hardly narrow: he sees the traditional university degree as an outdated artifact and believes Udacity will provide a new form of lifelong education better suited to the modern labor market.

Udacity is just one of several companies looking to capitalize on the burgeoning enthusiasm for MOOCs. In April, two of Thrun’s colleagues in Stanford’s computer science department, Daphne Koller and Andrew Ng, rolled out a similar startup called Coursera. Like Udacity, Coursera is a for-profit business backed with millions of dollars in venture capital. Unlike Udacity, Coursera is working in concert with big universities. Where Thrun wants to develop an alternative to a traditional university, Koller and Ng are looking to build a system that established schools can use to deliver their own classes over the Net. Coursera’s original partners included not only Stanford but Princeton, Penn, and the University of Michigan, and this summer the company announced affiliations with 29 more schools. It already has about 200 classes on offer, in fields ranging from statistics to sociology.

On the other side of the country, MIT and Harvard joined forces in May to form edX, a nonprofit that is also offering tuition-free online classes to all comers. Bankrolled with $30 million from each school, edX is using an open-source teaching platform developed at MIT. It includes video lessons and discussion forums similar to those offered by its for-profit rivals, but it also incorporates virtual laboratories where students can carry out simulated experiments. This past summer, the University of California at Berkeley joined edX, and in September the program debuted its first seven classes, mainly in math and engineering. Overseeing the launch of edX is Anant Agarwal, the former director of MIT’s Computer Science and Artificial Intelligence Laboratory.

The leaders of Udacity, Coursera, and edX have not limited their aspirations to enhancing distance learning. They believe that online instruction will become a cornerstone of the college experience for on-campus students as well. The merging of virtual classrooms with real classrooms, they say, will propel academia forward. “We are reinventing education,” declares Agarwal. “This will change the world.”

Professor Robot

Online courses aren’t new; big commercial outfits like the University of Phoenix and DeVry University offer thousands of them, and many public colleges allow students to take classes on the Net for credit. So what makes MOOCs different? As Thrun sees it, the secret lies in “student engagement.” Up to now, most Internet classes have consisted largely of videotaped lectures, a format that Thrun sees as deeply flawed. Classroom lectures are in general “boring,” he says, and taped lectures are even less engaging: “You get the worst part without getting the best part.” While MOOCs include videos of professors explaining concepts and scribbling on whiteboards, the talks are typically broken up into brief segments, punctuated by on-screen exercises and quizzes. Peppering students with questions keeps them involved with the lesson, Thrun argues, while providing the kind of reinforcement that has been shown to strengthen comprehension and retention.

Norvig, who earlier this year taught a Udacity class on computer programming, points to another difference between MOOCs and their predecessors. The economics of online education, he says, have improved dramatically. Cloud computing facilities allow vast amounts of data to be stored and transmitted at very low cost. Lessons and quizzes can be streamed free over YouTube and other popular media delivery services. And social networks like Facebook provide models for digital campuses where students can form study groups and answer each other’s questions. In just the last few years, the cost of delivering interactive multimedia classes online has dropped precipitously. That’s made it possible to teach huge numbers of students without charging them tuition.

It’s hardly a coincidence that Udacity, Coursera, and edX are all led by computer scientists. To fulfill their grand promise—making college at once cheaper and better—MOOCs will need to exploit the latest breakthroughs in large-scale data processing and machine learning, which enable computers to adjust to the tasks at hand. Delivering a complex class to thousands of people simultaneously demands a high degree of automation. Many of the labor-intensive tasks traditionally performed by professors and teaching assistants—grading tests, tutoring, moderating discussions—have to be done by computers. Advanced analytical software is also required to parse the enormous amounts of information about student behavior collected during the classes. By using algorithms to spot patterns in the data, programmers hope to gain insights into learning styles and teaching strategies, which can then be used to refine the technology further. Such artificial-intelligence techniques will, the MOOC pioneers believe, bring higher education out of the industrial era and into the digital age.

While their ambitions are vast, Thrun, Koller, and Agarwal all stress that their fledgling organizations are just starting to amass information from their courses and analyze it. “We haven’t yet used the data in a systematic way,” says Thrun. It will be some time before the companies are able to turn the information they’re collecting into valuable new features for professors and students. To see the cutting edge in computerized teaching today, you have to look elsewhere—in particular, to a small group of academic testing and tutoring outfits that are hard at work translating pedagogical theories into software code.

One of the foremost thinkers in this field is a soft-spoken New Yorker named David Kuntz. In 1994, after earning his master’s degree in philosophy and working as an epistemologist, or knowledge theorist, for the Law School Admission Council (the organization that administers the LSAT examinations), Kuntz joined the Educational Testing Service, which runs the SAT college-admission tests. ETS was eager to use the burgeoning power of computers to design more precise exams and grade them more efficiently. It set Kuntz and other philosophers to work on a very big question: how do you use software to measure meaning, promote learning, and evaluate understanding? The question became even more pressing when the World Wide Web opened the Internet to the masses. Interest in “e-learning” surged, and the effort to develop sophisticated teaching and testing software combined with the effort to design compelling educational websites.

Three years ago, Kuntz joined a small Manhattan startup called Knewton as its head of research. The company specializes in the budding discipline of adaptive learning. Like other trailblazers in instructional software, including the University of California-Irvine spinoff ALEKS, Carnegie Mellon’s Open Learning Initiative, and the much celebrated Khan Academy, it is developing online tutoring systems that can adapt to the needs and learning styles of individual students as they proceed through a course of instruction. Such programs, says Kuntz, “get better as more data is collected.” Software for, say, teaching algebra can be written to reflect alternative theories of learning, and then, as many students proceed through the program, the theories can be tested and refined and the software improved. The bigger the data sets, the more adept the systems become at providing each student with the right information in the right form at the right moment.

Knewton has introduced a remedial math course for incoming college students, and its technology is being incorporated into tutoring programs offered by the textbook giant Pearson. But Kuntz believes that we’re only just beginning to see the potential of educational software. Through the intensive use of data analysis and machine learning techniques, he predicts, the programs will advance through several “tiers of adaptivity,” each offering greater personalization through more advanced automation. In the initial tier, which is already largely in place, the sequence of steps a student takes through a course depends on that student’s choices and responses. Answers to a set of questions may, for example, trigger further instruction in a concept that has yet to be mastered—or propel the student forward by introducing material on a new topic. “Each student,” explains Kuntz, “takes a different path.”

In the next tier, which Knewton plans to reach soon, the mode in which material is presented adapts automatically to each student. Although the link between media and learning remains controversial, many educators believe that different students learn in different ways. Some learn best by reading text, others by watching a demonstration, others by playing a game, and still others by engaging in a dialogue. A student’s ideal mode may change, moreover, at each stage in a course—or even at different times during the day. A video lecture may be best for one lesson, while a written exercise may be best for the next. By monitoring how students interact with the teaching system itself—when they speed up, when they slow down, where they click—a computer can learn to anticipate their needs and deliver material in whatever medium promises to maximize their comprehension and retention.
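The first tier Kuntz describes amounts to simple branching on student responses. Here is a minimal sketch of that idea, assuming a made-up mastery threshold and concept names; it illustrates response-driven sequencing in general, not Knewton’s actual algorithm.

```python
# A minimal sketch of "first tier" adaptivity: what the student sees next
# depends only on their responses so far. Threshold and branch labels are
# illustrative assumptions.

def next_step(concept: str, quiz_answers: list[bool], mastery_threshold: float = 0.8) -> str:
    """Choose the next activity from quiz performance on one concept."""
    score = sum(quiz_answers) / len(quiz_answers)
    if score >= mastery_threshold:
        return f"advance: introduce new material that builds on '{concept}'"
    if score >= 0.5:
        return f"review: show a worked example of '{concept}', then re-quiz"
    return f"remediate: return to instruction on prerequisites of '{concept}'"

# Example: a student misses most questions on a quadratic-equations quiz.
print(next_step("quadratic equations", [True, False, False, False]))
```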

Looking toward the future, Kuntz says that computers will ultimately be able to tailor an entire “learning environment” to fit each student. Elements of the program’s interface, for example, will change as the computer senses the student’s optimum style of learning.

Big Data on Campus

The advances in tutoring programs promise to help many college, high-school, and even elementary students master basic concepts. One-on-one instruction has long been known to provide substantial educational benefits, but its high cost has constrained its use, particularly in public schools. It’s likely that if computers are used in place of teachers, many more students will be able to enjoy the benefits of tutoring. According to one recent study of undergraduates taking statistics courses at public universities, the latest of the online tutoring systems seems to produce roughly the same results as face-to-face instruction.

While MOOCs are incorporating adaptive learning routines into their software, their ambitions for data mining go well beyond tutoring. Thrun says that we’ve only seen “the tip of the iceberg.” What particularly excites him and other computer scientists about free online classes is that thanks to their unprecedented scale, they can generate the immense quantities of data required for effective machine learning. Koller says that Coursera has set up its system with intensive data collection and analysis in mind. Every variable in a course is tracked. When a student pauses a video or increases its playback speed, that choice is captured in the Coursera database. The same thing happens when a student answers a quiz question, revises an assignment, or comments in a forum. Every action, no matter how inconsequential it may seem, becomes grist for the statistical mill.
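As a rough illustration of what tracking "every variable" can look like in practice, here is a minimal sketch of a clickstream event record; the field names and actions are assumptions for illustration, not Coursera's actual schema.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Event:
    user_id: str
    course_id: str
    action: str      # e.g. "pause_video", "set_playback_speed", "submit_quiz", "post_comment"
    detail: dict
    timestamp: float

def log_event(store, event):
    """Append one serialized event; a real platform would write to a database or event stream."""
    store.append(json.dumps(asdict(event)))

events = []
log_event(events, Event("u42", "ml-101", "set_playback_speed",
                        {"speed": 1.5, "video": "week2-lecture3"}, time.time()))
log_event(events, Event("u42", "ml-101", "submit_quiz",
                        {"quiz": "week2-q1", "correct": 7, "total": 10}, time.time()))
print(len(events), "events captured")
```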

Assembling information on student behavior at such a minute level of detail, says Koller, “opens new avenues for understanding learning.” Previously hidden patterns in the way students navigate and master complex subject matter can be brought to light.

The number-crunching also promises to benefit teachers and students directly, she adds. Professors will receive regular reports on what’s working in their classes and what’s not. And by pinpointing “the most predictive factors for success,” MOOC software will eventually be able to guide each student onto “the right trajectory.” Koller says she hopes that Lake Wobegon, the mythical town in which “all students are above average,” will “come to life.”

MIT and Harvard are designing edX to be as much a tool for educational research as a digital teaching platform, Anant Agarwal says. Scholars are already beginning to use data from the system to test hypotheses about how people learn, and as the portfolio of courses grows, the opportunities for research will proliferate. Beyond generating pedagogical insights, Agarwal foresees many other practical applications for the edX data bank. Machine learning may, for instance, pave the way for an automated system to detect cheating in online classes, a challenge that is becoming more pressing as universities consider granting certificates or even credits to students who complete MOOCs.

With a data explosion seemingly imminent, it’s hard not to get caught up in the enthusiasm of the MOOC architects. Even though their work centers on computers, their goals are deeply humanistic. They’re looking to use machine learning to foster student learning, to deploy artificial intelligence in the service of human intelligence. But the enthusiasm should be tempered by skepticism. The benefits of machine learning in education remain largely theoretical. And even if AI techniques generate genuine advances in pedagogy, those breakthroughs may have limited application. It’s one thing for programmers to automate courses of instruction when a body of knowledge can be defined explicitly and a student’s progress measured precisely. It’s a very different thing to try to replicate on a computer screen the intricate and sometimes ineffable experiences of teaching and learning that take place on a college campus.

The promoters of MOOCs have a “fairly naïve perception of what the analysis of large data sets allows,” says Timothy Burke, a history professor at Swarthmore College. He contends that distance education has historically fallen short of expectations not for technical reasons but, rather, because of “deep philosophical problems” with the model. He grants that online education may provide efficient training in computer programming and other fields characterized by well-established procedures that can be codified in software. But he argues that the essence of a college education lies in the subtle interplay between students and teachers that cannot be simulated by machines, no matter how sophisticated the programming.

Alan Jacobs, a professor of English at Wheaton College in Illinois, raises similar concerns. In an e-mail to me, he observed that the work of college students “can be affected in dramatic ways by their reflection on the rhetorical situations they encounter in the classroom, in real-time synchronous encounters with other people.” The full richness of such conversations can’t be replicated in Internet forums, he argued, “unless the people writing online have a skilled novelist’s ability to represent complex modes of thought and experience in prose.” A computer screen will never be more than a shadow of a good college classroom. Like Burke, Jacobs worries that the view of education reflected in MOOCs has been skewed toward that of the computer scientists developing the platforms.

Flipping the Classroom

The designers and promoters of MOOCs don’t suggest that computers will make classrooms obsolete. But they do argue that online instruction will change the nature of teaching on campus, making it more engaging and efficient. The traditional model of instruction, where students go to class to listen to lectures and then head off on their own to complete assignments, will be inverted. Students will listen to lectures and review other explanatory material alone on their computers (as some middle-school and high-school students already do with Khan Academy videos), and then they’ll gather in classrooms to explore the subject matter more deeply—through discussions with professors, say, or through lab exercises. In theory, this “flipped classroom” will allocate teaching time more rationally, enriching the experience of both professor and student.

Here, too, there are doubts. One cause for concern is the high dropout rate that has plagued the early MOOCs. Of the 160,000 people who enrolled in Norvig and Thrun’s AI class, only about 14 percent ended up completing it. Of the 155,000 students who signed up for an MIT course on electronic circuits earlier this year, only 23,000 bothered to finish the first problem set. About 7,000, or 5 percent, passed the course. Shepherding thousands of students through a college class is a remarkable achievement by any measure—typically only about 175 MIT students finish the circuits course each year—but the dropout rate highlights the difficulty of keeping online students attentive and motivated. Norvig acknowledges that the initial enrollees in MOOCs have been an especially self-motivated group. The real test, particularly for on-campus use of online instruction, will come when a broader and more typical cohort takes the classes. MOOCs will have to inspire a wide variety of students and retain their interest as they sit in front of their computers through weeks of study.

The greatest fear among the critics of MOOCs is that colleges will rush to incorporate online instruction into traditional classes without carefully evaluating the possible drawbacks. Last fall, shortly before he cofounded Coursera, Andrew Ng adapted his Stanford course on machine learning so that online students could participate, and thousands enrolled. But at least one on-campus student found the class wanting. Writing on his blog, computer science major Ben Rudolph complained that the “academic rigor” fell short of Stanford’s standards. He felt that the computerized assignments, by providing automated, immediate hints and guidance, failed to encourage “critical thinking.” He also reported a sense of isolation. He “met barely anyone in [the] class,” he said, because “everything was done alone in my room.” Ng has staunchly defended the format of the class, but the fact is that no one really knows how an increasing stress on computerized instruction will alter the dynamics of college life.

The leaders of the MOOC movement acknowledge the challenges they face. Perfecting the model, says Agarwal, will require “sophisticated inventions” in many areas, from grading essays to granting credentials. This will only get harder as the online courses expand further into the open-ended, exploratory realms of the liberal arts, where knowledge is rarely easy to codify and the success of a class can hinge on a professor’s ability to guide students toward unexpected insights. The outcome of this year’s crop of MOOCs should tell us a lot more about the value of the classes and the role they’ll ultimately play in the educational system.

At least as daunting as the technical challenges will be the existential questions that online instruction raises for universities. Whether massive open courses live up to their hype or not, they will force college administrators and professors to reconsider many of their assumptions about the form and meaning of teaching. For better or worse, the Net’s disruptive forces have arrived at the gates of academia.


Nicholas Carr is the author of The Shallows: What the Internet Is Doing to Our Brains. His last article for MIT Technology Review was “The Library of Utopia.”

Originally Posted On: informationweek.com

Amazon recently proved it isn’t infallible when it shut down a human resources system that was systematically biased against women. However, there’s more to the story that today’s enterprise leaders should know.

When people talk about machine learning masters, Amazon is always top-of-mind. For more than two decades, the company’s recommendation capabilities have been coveted by others hoping to imitate them. However, even Amazon hasn’t mastered machine learning completely, as evidenced by the biased HR system it shut down. What may be surprising to some is the underlying reality: biased data isn’t just a technical problem; it’s a business problem.

Specifically, Reuters and others recently reported that since 2014 Amazon had been using a recruiting engine that was systematically biased against women seeking technical positions. It doesn’t necessarily follow that Amazon is biased against tech-savvy women, but the situation does seem to indicate that the historical data used to train the system included more males than females.

Historically, men have held more technical positions than women, and not just at Amazon. The world’s population is split roughly evenly between men and women, with one sex somewhat more predominant in some cultures than others. Yet women hold only 26% of “professional computing occupations.” If the training data reflects a workforce in which roughly three out of four technical workers are men, it follows that an AI trained on that data will reflect the same imbalance.
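As a toy illustration of that point, the sketch below scores candidates by nothing more than group-level hire rates computed from invented historical data with a three-to-one male skew. It is not Amazon's system, but it shows how an imbalance in the training data comes straight back out as an imbalance in the scores.

```python
# Toy illustration only: the counts are invented to mirror a roughly
# three-to-one male skew among historical technical hires.
history = (
    [{"gender": "M", "hired": 1}] * 75 +   # past male hires
    [{"gender": "F", "hired": 1}] * 25 +   # past female hires
    [{"gender": "M", "hired": 0}] * 50 +   # rejected male applicants
    [{"gender": "F", "hired": 0}] * 50     # rejected female applicants
)

def hire_rate(records, gender):
    """Fraction of applicants in the given group who were hired historically."""
    group = [r for r in records if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

# A naive "model" that scores candidates by their group's historical hire rate
# simply echoes the imbalance baked into the training data.
scores = {"M": hire_rate(history, "M"), "F": hire_rate(history, "F")}
print(scores)  # {'M': 0.6, 'F': 0.333...} -> male candidates ranked higher
```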

Amazon now faces a public relations fiasco even though it abandoned the system. According to a spokesperson, it “was never used by Amazon recruiters to evaluate candidates.” It was used only in a trial phase, never independently, and never rolled out to a larger group. The project was abandoned a couple of years ago for many reasons, including that it never returned strong candidates for a role. Interestingly, the company claims that bias wasn’t the issue.

If bias isn’t the issue, then what is?

There’s no doubt that the outcome of Amazon’s HR system was biased. Biased data produces biased outcomes. However, there is another important issue that neither Amazon nor much of the media coverage identified: data quality.

For years, organizations have been hearing about the need for good-quality data. For one thing, good-quality data is more reliable than bad-quality data. Just about every business wants to use analytics to make better business decisions, but not everyone is thinking about the quality of the data that is being relied upon to make such decisions. Data is also used to train AI systems, so the quality of that data should be top-of-mind. Sadly, in an HR context, bad data is the norm.

“If they’d asked us, I would have said starting with resumes is a bad idea,” said Kevin Parker, CEO of hiring intelligence company HireVue. “It will never work, particularly when you’re looking at resumes for training data.”

As if the poor quality of resume data wasn’t enough to derail Amazon’s project, add job descriptions. Job descriptions are often poorly written, so the likely result is a system that attempts to match attributes from one pool of poor-quality data with another pool of poor-quality data.

Bias is a huge issue, regardless

Humans tend to be naturally biased creatures. Since humans have created and are still behind the creation of data, it only stands to reason that their biases will be reflected in the data. While there are ways of correcting for bias, it isn’t as simple as pressing a button. One must be able to identify the bias in the first place and should also understand the context of that bias.

“We think of resumes as a representation of the person, but let’s go to the person and get to the root of what we’re trying to do, and try to figure out if the person is a great match for this particular job. Are they empathetic? Are they great problem solvers? Are they great analytical thinkers? All of the things that define success in a job or role,” said HireVue’s Parker.

HireVue is building its own AI models that are correlated to performance in customer organizations.

“[The models are] validated. We do a lot of work to eliminate bias in the training data and we can prove it arithmetically,” said Parker. “The underlying flaw is don’t start with resumes because it won’t end well.”

HireVue looks at the data collected during the course of a 20- to 30-minute video interview. During that time, it’s able to collect tens of thousands of data points. Its system is purportedly capable of showing a before-and-after comparison arithmetically: if everyone who has succeeded in a particular role so far is a middle-aged white man, but the company wants the same level of success from a more diverse workforce, what underlying competencies and work-related skills is it actually seeking?

“By understanding the attributes of the best, middle and poor performers in an organization, an AI model can be built [that looks] for those attributes in a video interview so you can know almost in real-time if a candidate is a good candidate or not and respond to each in a different way,” said Parker.

Recruitment software and marketplace ScoutExchange analyzes the track records of individual recruiters to identify the types of biases they’ve exhibited over time, such as whether they have hired more men than women or tend to prefer candidates from certain colleges or universities over others.
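One generic way to surface that kind of pattern is to compute selection rates by group from a recruiter's past placements and compare them, as in the sketch below. This is a simple illustration using the widely cited four-fifths (80 percent) rule of thumb, not ScoutExchange's actual methodology.

```python
from collections import defaultdict

def selection_rates(placements):
    """placements: list of dicts with 'group' (e.g. gender or school) and 'hired' (bool)."""
    totals, hires = defaultdict(int), defaultdict(int)
    for p in placements:
        totals[p["group"]] += 1
        hires[p["group"]] += int(p["hired"])
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate; values below 0.8 are a common red flag."""
    return min(rates.values()) / max(rates.values())

# Invented track record for one recruiter, for illustration only.
track_record = [
    {"group": "men", "hired": True}, {"group": "men", "hired": True},
    {"group": "men", "hired": False}, {"group": "women", "hired": True},
    {"group": "women", "hired": False}, {"group": "women", "hired": False},
]
rates = selection_rates(track_record)
print(rates, disparate_impact(rates))  # men ~0.67 vs women ~0.33 -> ratio 0.5
```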

“There’s bias in all data and you need a strategy to deal with it or you’re going to end up with results you don’t like and you won’t use [the system],” said Ken Lazarus, CEO of ScoutExchange. “The people at Amazon are pretty smart and pretty good at machine learning and recommendations, but it points out the real difficulty of trying to match humans without any track record. We look at a recruiter’s track record so we can remove bias. Everyone needs a strategy to do that or you’re not going to get anywhere.”

The three things to take away from Amazon’s situation are these:

1 – Despite all the hype about machine learning, it isn’t perfect. Even Amazon doesn’t get everything right all the time. No organization or individual does.

2 – Bias isn’t the sole domain of statisticians and data scientists. Business and IT leaders need to be concerned about it because bias can have very real business impacts, as Amazon’s gaffe demonstrates.

3 – Data quality matters. Data quality is not considered as hot a topic as AI, but the two go hand-in-hand. Data is AI brain food.

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit.

Originally Posted On: vice.com

High schoolers are weighing the benefits of blue-collar trades at a time when well-paying jobs—and no debt—are hard to pass up.

This story appears in VICE Magazine’s Power and Privilege Issue.

On a recent Wednesday morning, about 20 students at Queens Technical High School marched into a supply closet and retrieved what looked, to an outsider, like silver suitcases. They sat back down at the classroom’s U-shaped table arrangement and opened what were in fact “advanced cable trainers,” kits containing the cables and wire cutters they’d be working with throughout their senior year. Meanwhile, their teacher, David Abreu, began to lecture them about what it’s like out “in industry”—the vocational school term for the proverbial “real world.”

“When you go out there, there’s no reason why anyone should be sitting on mommy’s couch, eating cereal, and watching cartoons or a telenovela,” he told the teens, who were mostly male. “There’s tons of construction, and there’s not enough people. So they’re hiring from outside of New York City. They’re getting people from the Midwest. I love the accents, but they don’t have enough of you.”

He asked the class if anyone could name the most common delay announcement on New York City’s notoriously beleaguered subway system. A hand with pink fingernails promptly shot into the air.

“We’re sorrrrrry,” a girl with curly hair and ripped jeans mimicked, before sticking out her tongue. The room erupted in laughter.

“Signal problems,” corrected the teacher, himself a graduate of Queens Tech. “They don’t have enough technicians to keep things working properly. What they need is you.”

Abreu is a Queens Tech graduate, and his son is now a student at the school as well.

Abreu was onto something. As the Brookings Institution noted in 2017, participation in career and technical education (CTE) has declined for several decades. That was in part because of a lack of funding and the fact that many states implemented more stringent academic requirements. However, the growing belief that everyone should obtain a college education also surely played a part. The National Center for Education Statistics found that the number of CTE credits earned by American high school students declined by 14 percent between 1990 and 2009.

But the jobs are still there. NPR reported in April that the pressure to attend a four-year college remained so strong in American society that many high-paying jobs in the trades were currently sitting empty. Melissa Burg, the principal of Queens Tech, insisted that New York City’s Department of Education and some savvy parents had taken note of this dynamic, increasingly regarding a bachelor’s degree as the new high school diploma.

“I think those [trade] jobs go unfilled because skilled labor is looked down upon, even though those skilled labor people make more money than I do,” she explained. “I don’t know if people don’t want to work as physically hard as they used to, or if they see their families who’ve worked hard physically, or if those families are saying, ‘Don’t do what I did.’”

Meanwhile, in-state tuition and fees at public four-year schools have increased at an average rate of more than 3 percent above inflation each year in the past decade, according to data from the College Board. Experts say that’s partly a result of a sort of amenities arms race, in which schools use expensive construction projects to lure applicants. That cost—in combination with factors like increased demand and lack of state funding—is then passed down to the customer, in this case the student, and the situation has resulted in the average graduate walking away with almost $40,000 in debt.

For the students at Queens Tech, as for many young people across America graduating into what every newspaper and expert tells them is a wonderful economy, adult life is no longer about how much money they stand to make out of the gate. It’s about what size hole they might have to crawl out from just to break even. Perhaps that’s why there’s a renewed sense of energy surrounding CTE: Burg said that in the almost ten years she’s been principal—a timeline that roughly correlates with the last financial crisis—she consistently saw an uptick in applications. Brookings noted renewed interest nationwide as well.

Clive Belfield, a professor of economics at Queens College, said some young people might be wary of entering certain blue-collar industries because of the Uberization and outsourcing they’ve observed in their lifetimes. However, he noted that the trades may be less imperiled in New York than elsewhere in America, because unions are still a somewhat protective force, and suggested working for the MTA might be among the safest choices of all. Definitionally, those jobs can’t be sent to China.

Overall, he continued, a renewed interest in the trades might represent a natural market correction given how unaffordable college is, and how increasingly useless a diploma may seem to be. But in a world that pays a comically disproportionate amount of attention to Ivy League students and what they’re up to, even the most pragmatic 17-year-old may not pass up the chance at a four-year degree.

“It’s hard to think, when you’re that young and living in a world that’s obsessed with Harvard University, ‘This job is not very glamorous, but at least I’ll get to keep it,’” Belfield told me.

The advanced cable trainers (kits containing cables and wire cutters) let students practice on miniature versions of cable lines.

The decision to take on either stigma or debt—which carries its own form of stigma—is not an ideal one, and Mauricio Bustamante wrestled with it in his senior year. Now 20, he was in the enviable position of being offered both a union gig with the MTA and full tuition to a school upstate back in 2015. Although high school graduates had median weekly earnings of $718 in 2017, according to the US Bureau of Labor Statistics, the apprentice job paid $22 an hour to start, or $880 a week. That seemed like an enormous amount to the then-17-year-old kid who grew up in a single-parent home in Woodside, Queens. Then again, the prospect of being the first person in his family to attend college was undeniably appealing. There’s also the fact that the average college graduate took home a starting salary of $50,516 a year in 2017, or $971 a week, according to the National Association of Colleges and Employers (NACE). That number was even higher—$1,271—for engineers.
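For readers checking the math, the weekly figures above follow from straightforward conversions, assuming a 40-hour week and a 52-week year:

```python
# Checking the weekly-pay comparison above (40-hour week, 52-week year assumed).
apprentice_weekly = 22 * 40            # $880 per week for the MTA apprenticeship
grad_weekly = round(50_516 / 52)       # about $971 per week for the average new graduate
hs_median_weekly = 718                 # BLS 2017 median for high school graduates, as cited
print(apprentice_weekly, grad_weekly, hs_median_weekly)  # 880 971 718
```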

Both Bustamante and his mother changed their opinions on what to do several times.

“And then just as I was about to decline my offer at the school, she changed her mind and said, ‘You have to go,’” he told me. “I ended up doing it for her, really.”

Bustamante, a former student of Abreu’s who’s now a junior at St. Lawrence University, where he’s double-majoring in math and economics, ultimately figured it was worth at least trying to land a job crunching numbers for a nonprofit, or calculating risk for an insurance company. But when I canvassed students currently finishing up the electrical installation track at his alma mater, they were less focused on the college-versus-job question than whether they wanted to work below ground or in a more traditional office setting.

Haw Wunna Zaw, a student at Queens Tech, says that even if a construction worker makes as much money as a doctor, they don’t earn as much respect in American society.

Haw Wunna Zaw, a 16-year-old born of immigrant parents, applied to roughly the same number of vocational and traditional high schools. His mom was a PhD candidate and his father went to military school in Burma, and, when we met, Haw said he hoped to work for the MTA after graduation while taking college classes at night. He added he’d received no pressure from his parents to choose university over going straight to work, something he attributed to the fact that, where they come from, there’s an immense pressure to go to college after high school. They just want him to be happy.

Still, he seemed to understand jobs in the trades had lost clout in American society. As union membership has declined, old-school patronage has broken down, and tech companies have disrupted industry after industry, fixing escalators for a living is less sexy than ever.

“If you’re a doctor, people admire you and you have the glory,” he told me. “If you’re a construction worker, you may get paid the same as a doctor, but you don’t look as good.”

Meanwhile, Brigitte Barcos, the 17-year-old who stuck out her tongue in class, originally applied with her best friend to Queens Tech, where the two planned on studying cosmetology together. The friend didn’t get in, and Barcos didn’t end up liking the track she was on. Then she found herself unexpectedly passionate about tinkering with circuits, planning to pursue electrical engineering any way she could. For her, that didn’t necessarily mean college and a degree that might help her become a supervisor for other people getting their hands dirty, rather than dirtying her own. Besides, she hadn’t bought into the once ironclad notion that college was a path to financial solvency.

“I feel like everyone has the expectation that you have to go to college to get more money, and that’s a lie,” she told me. “You waste more money to go to college than you get out of it.”

The only problem, Barcos explained, was that her parents didn’t think trades like electrical installation were appropriate for women. That mind-set is one Abreu said he’d had to contend with over the years with relative frequency, though he’d also seen immigrant parents cheer on daughters with 95 percent averages who decided they wanted to help fix the crumbling transit system.

Seventeen-year-old Brigitte Barcos originally attended Queens Tech for cosmetology before switching to the electrical installation track.

More prevalent, Abreu told me, was the unshakable conviction that college was the only answer, something he disagreed with his mother about decades ago. At one point, he had enrolled in a traditional four-year college, only to back out early after getting an offer to come back and apprentice at Queens Tech as part of another vocational program the school offered. His mother thought he had made a huge mistake—until she saw his first paycheck.

So when coaching students who are committed to taking jobs “in industry” but facing off with reluctant parents, money often amounts to Abreu’s best bargaining chip. In fact, he said, his kids stop at nothing to get those high-paying jobs. Part of that seemed to stem from the fact that Queens Tech has tended to be surrounded by trade unions, and students can see the incredibly long lines of people just waiting for a chance to apply. With the certifications they obtain as part of their high school curriculum, they can acquire what amounts to an express pass to jobs that thousands of people are visibly desperate for. That changes your thinking.

After class finished up for the day, Abreu told me about a group of former students who were dead-set on becoming bridge painters—a profession that pays around $95 an hour, and therefore remains highly competitive, even for trade-school VIPs like them.

“I said, ‘You know why they pay that much money, right?’” he recalled. “It’s a dangerous job. But there they were, out there in line the midnight before, standing out there together, huddled in the cold, just waiting for an application.”

Follow Allie Conti on Twitter.

Recognizing Everyone As A Student For Life