In the past few years, artificial intelligence has advanced so quickly that it now seems hardly a month goes by without a newsworthy AI breakthrough. In areas as wide-ranging as speech translation, medical diagnosis, and gameplay, we have seen computers outperform humans in startling ways.
This has sparked a discussion about how AI will impact employment. Some fear that as AI improves, it will supplant workers, creating an ever-growing pool of unemployable humans who cannot compete economically with machines.
This concern, while understandable, is unfounded. In fact, AI will be the greatest job engine the world has ever seen.
New Technology Isn’t a New Phenomenon
On the one hand, those who predict massive job loss from AI can be excused. It is easier to see existing jobs disrupted by new technology than to envision what new jobs the technology will enable.
But on the other hand, radical technological advances aren’t a new phenomenon. Technology has progressed nonstop for 250 years, and in the US unemployment has stayed between 5 and 10 percent for almost all that time, even when radical new technologies like steam power and electricity came on the scene.
But you don’t have to look back to steam, or even electricity. Just look at the internet. Go back 25 years, well within the memory of today’s pessimistic prognosticators, to 1993. The web browser Mosaic had just been released, and the phrase “surfing the web,” that most mixed of metaphors, was just a few months old.
If someone had asked you what would be the result of connecting a couple billion computers into a giant network with common protocols, you might have predicted that email would cause us to mail fewer letters, and the web might cause us to read fewer newspapers and perhaps even do our shopping online. If you were particularly farsighted, you might have speculated that travel agents and stockbrokers would be adversely affected by this technology. And based on those surmises, you might have thought the internet would destroy jobs.
But now we know what really happened. The obvious changes did occur. But a slew of unexpected changes happened as well. We got thousands of new companies worth trillions of dollars. We bettered the lot of virtually everyone on the planet touched by the technology. Dozens of new careers emerged, from web designer to data scientist to online marketer. The cost of starting a business with worldwide reach plummeted, and the cost of communicating with customers and leads went to nearly zero. Vast storehouses of information were made freely available and used by entrepreneurs around the globe to build new kinds of businesses.
But yes, we mail fewer letters and buy fewer newspapers.
The Rise of Artificial Intelligence
Then along came a new, even bigger technology: artificial intelligence. You hear the same refrain: “It will destroy jobs.”
Consider the ATM. If you had to point to a technology that looked as though it would replace people, the ATM might look like a good bet; it is, after all, an automated teller machine. And yet, there are more tellers now than when ATMs were widely released. How can this be? Simple: ATMs lowered the cost of opening bank branches, and banks responded by opening more, which required hiring more tellers.
In this manner, AI will create millions of jobs that are far beyond our ability to imagine. For instance, AI is becoming adept at language translation—and according to the US Bureau of Labor Statistics, demand for human translators is skyrocketing. Why? If the cost of basic translation drops to nearly zero, the cost of doing business with those who speak other languages falls. Thus, it emboldens companies to do more business overseas, creating more work for human translators. AI may do the simple translations, but humans are needed for the nuanced kind.
In fact, the BLS forecasts faster-than-average job growth in many occupations that AI is expected to impact: accountants, forensic scientists, geological technicians, technical writers, MRI operators, dietitians, financial specialists, web developers, loan officers, medical secretaries, and customer service representatives, to name a very few. These fields will not experience job growth in spite of AI, but through it.
But just as with the internet, the real gains in jobs will come from places where our imaginations cannot yet take us.
You may recall waking up one morning to the news that “47 percent of jobs will be lost to technology.”
That report by Carl Frey and Michael Osborne is a fine piece of work, but readers and the media distorted their 47 percent number. What the authors actually said is that some functions within 47 percent of jobs will be automated, not that 47 percent of jobs will disappear.
Frey and Osborne go on to rank occupations by “probability of computerization” and give the following jobs a 65 percent or higher probability: social science research assistants, atmospheric and space scientists, and pharmacy aides. So what does this mean? Social science professors will no longer have research assistants? Of course they will. They will just do different things because much of what they do today will be automated.
The intergovernmental Organization for Economic Co-operation and Development released a report of their own in 2016. This report, titled “The Risk of Automation for Jobs in OECD Countries,” applies a different “whole occupations” methodology and puts the share of jobs potentially lost to computerization at nine percent. That is normal churn for the economy.
But what of the skills gap? Will AI eliminate low-skill jobs while creating only high-skill opportunities? The relevant question is whether most people can do a job that’s just a little more complicated than the one they currently have. This is exactly what happened with the industrial revolution: farmers became factory workers, factory workers became factory managers, and so on.
Embracing AI in the Workplace
A January 2018 Accenture report titled “Reworking the Revolution” estimates that new applications of AI combined with human collaboration could boost employment worldwide as much as 10 percent by 2020.
Electricity changed the world, as did mechanical power, as did the assembly line. No one can reasonably claim that we would be better off without those technologies. Each of them bettered our lives, created jobs, and raised wages. AI will be bigger than electricity, bigger than mechanization, bigger than anything that has come before it.
This is how free economies work, and why we have never run out of jobs due to automation. There are not a fixed number of jobs that automation steals one by one, resulting in progressively more unemployment. There are as many jobs in the world as there are buyers and sellers of labor.
You’re driving along the highway when, suddenly, a person darts out across the busy road. There’s speeding traffic all around you, and you have a split second to make the decision: do you swerve to avoid the person and risk causing an accident?
Do you carry on and hope to miss them? Do you brake? How does your calculus change if, for example, there’s a baby strapped in the back seat?
In many ways, this is the classic “moral dilemma,” often called the trolley problem. It has a million perplexing variants, designed to expose human bias, but they all share the basics in common. You’re in a situation with life-or-death stakes, and no easy options, where the decision you make effectively prioritizes who lives and who dies.
A new paper from MIT published last week in Nature attempts to come up with a working solution to the trolley problem, crowdsourcing it from millions of volunteers. The experiment, launched in 2014, defied all expectations, receiving over 40 million responses from 233 countries, making it one of the largest moral surveys ever conducted.
A human might not consciously make these decisions. It’s hard to weigh up relevant ethical systems as your car veers off the road. But, in our world, decisions are increasingly made by algorithms, and computers just might be able to react faster than we can.
Hypothetical situations with self-driving cars are not the only moral decisions algorithms will have to make. Healthcare algorithms will choose who gets which treatment with limited resources. Automated drones will choose how much “collateral damage” to accept in military strikes.
Not All Morals Are Created Equal
Yet “solutions” to trolley problems are as varied as the problems themselves. How can machines make moral decisions when problems of morality are not universally agreed upon, and may have no solution? Who gets to choose right and wrong for the algorithm?
The crowd-sourcing approach adopted by the Moral Machine researchers is a pragmatic one. After all, for the public to accept self-driving cars, they must accept the moral framework behind their decisions. It’s no good if the ethicists or lawyers agree on a solution that’s unacceptable or inexplicable to ordinary drivers.
The results have the intriguing implication that moral priorities (and hence the types of algorithmic decisions that might be acceptable to people) vary depending on where you are in the world.
The researchers first acknowledge that it’s impossible to know the frequency or character of these situations in real life. Those involved in accidents often can’t tell us exactly what happened, and the range of possible situations defies easy classification. So, to make the problem tractable, they break it down into simplified scenarios, looking for universal moral rules.
As you take the survey, you’re presented with thirteen questions that ask for a simple yes or no choice, trying to narrow down responses to nine factors.
Should the car swerve into the other lane, or should it keep going? Should you preserve the young people versus the old people? Women over men? Pets over humans? Should you try to spare the most lives possible, or is one baby “worth” two elderly people? Spare the passengers in the car versus the pedestrians? Those who are crossing the road legally versus illegally? Should you spare people who are more physically fit? What about those with higher social status, like doctors or businessmen?
In this harsh, hypothetical world, somebody’s got to die, and you’ll find yourself answering each of these questions—with varying degrees of enthusiasm. Yet making these decisions exposes deeply-ingrained cultural norms and biases.
Crunching through the vast dataset the researchers obtained as a result of the survey yields universal rules as well as fascinating exceptions. The three most dominant factors, averaged across the entire population, were that everyone preferred to spare more lives than fewer, humans over pets, and the young over the elderly.
You might agree with these broad strokes, but looking further yields some pretty disturbing moral conclusions. More respondents chose to save a criminal than a cat, but fractionally preferred to save a dog over a criminal. As a global average, being old is judged more harshly than being homeless—yet homeless people were spared less often than the obese.
These rules didn’t apply universally: respondents from France, the United Kingdom, and the US had the greatest preference for youth, while respondents from China and Taiwan were more willing to spare the elderly. Respondents from Japan displayed a strong preference for saving pedestrians over passengers in the car, while respondents from China tended to choose to save passengers over pedestrians.
The researchers found that they could cluster responses by country into three groups: “Western,” predominantly North America and Europe, where they argued morality was predominantly influenced by Christianity; “Eastern,” consisting of Japan, Taiwan, and Middle Eastern countries influenced by Confucianism and Islam, respectively; and “Southern” countries including Central and South America, alongside those with a strong French cultural influence. In the Southern cluster there were stronger preferences for sparing women and the fit than anywhere else. In the Eastern cluster, the bias towards saving young people was least powerful.
Filtering by the various attributes of the respondent yields endless interesting tidbits. “Very religious” respondents are fractionally more likely to save humans over animals, but both religious and irreligious respondents display roughly equal preference for saving those of high social status vs. those of low social status, even though (one might argue) it contradicts some religious doctrines. Both men and women prefer to save women, on average—but men are ever-so-slightly less inclined to do so.
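As a rough illustration of how such a dataset can be mined, the sketch below computes a per-country “preference rate” for a single factor. The records, country codes, and resulting rates are entirely made up for illustration; the real Moral Machine dataset is far larger and richer.

```python
from collections import defaultdict

# Hypothetical survey records: (country, factor, favored) where `favored`
# is True if the respondent spared the character that factor favors
# (e.g. the younger person for the "youth" factor). All data is invented.
responses = [
    ("US", "youth", True), ("US", "youth", True), ("US", "youth", False),
    ("FR", "youth", True), ("FR", "youth", True), ("FR", "youth", True),
    ("CN", "youth", False), ("CN", "youth", True), ("CN", "youth", False),
]

def preference_rate(records, factor):
    """Share of responses, per country, that favored the given factor."""
    counts = defaultdict(lambda: [0, 0])  # country -> [favored, total]
    for country, f, favored in records:
        if f != factor:
            continue
        counts[country][1] += 1
        if favored:
            counts[country][0] += 1
    return {c: fav / tot for c, (fav, tot) in counts.items()}

rates = preference_rate(responses, "youth")
print({c: round(r, 2) for c, r in rates.items()})  # → {'US': 0.67, 'FR': 1.0, 'CN': 0.33}
```

Grouping the same rates by region would reproduce the kind of “Western / Eastern / Southern” clustering the researchers describe, with real data in place of the toy records above.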
Questions With No Answer
No one is arguing that this study somehow “resolves” these weighty moral questions. The authors of the study note that crowdsourcing the data online introduces a sample bias. The respondents skewed young, skewed male, and skewed well-educated; in other words, they looked like the kind of people who might spend 20 minutes online filling out a survey about morality for self-driving cars from MIT.
Even with a vast sample size, the number of questions the researchers posed was limited. Getting nine different variables into the mix was hard enough—it required making the decisions simple and clear-cut. What happens if, as you might expect in reality, the risks were different depending on the decision you took? What if the algorithm were able to calculate, for example, that you had only a 50 percent chance of killing pedestrians given the speed you’re going?
Edmond Awad, one of the authors of the study, expressed caution about over-interpreting the results. “It seems concerning that people found it okay to a significant degree to spare higher status over lower status,” he told MIT Technology Review. “It’s important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’ The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who’s going to die or not, and also about how bias is happening.”
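The shift Awad describes, from certain outcomes to risk analysis, can be made concrete with a toy expected-harm calculation. Every probability and harm count below is invented for illustration; nothing here reflects how any real vehicle’s software weighs outcomes.

```python
# Toy risk analysis: compare candidate actions by expected harm rather
# than by certain life-or-death outcomes. All numbers are made up.
def expected_harm(outcomes):
    """outcomes: list of (probability, people_harmed) pairs."""
    return sum(p * n for p, n in outcomes)

# Action A: stay the course -- a 50% chance of hitting two pedestrians.
stay = [(0.5, 2), (0.5, 0)]
# Action B: swerve -- a 90% chance of harming the single passenger.
swerve = [(0.9, 1), (0.1, 0)]

print(expected_harm(stay))    # 1.0
print(expected_harm(swerve))  # 0.9
```

Framed this way, the question is no longer “who dies?” but “which action carries less expected harm?”, which is closer to the decisions a real system would actually face.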
Perhaps the most important result of the study is the discussion it has generated. As algorithms start to make more and more important decisions, affecting people’s lives, it’s crucial that we have a robust discussion of AI ethics. Designing an “artificial conscience” should be a process with input from everybody. While there may not always be easy answers, it’s surely better to understand, discuss, and attempt to agree on the moral framework for these algorithms, rather than allowing the algorithms to shape the world with no human oversight.
About Author: Thomas Hornigold is a physics student at the University of Oxford. When he’s not geeking out about the Universe, he hosts a podcast, Physical Attraction, which explains physics – one chat-up line at a time.
The explosion of data in consumer and business spaces can place our productivity at risk. There are ways you can resist drowning in data.
The pace of data creation steadily increases as technology becomes more and more ingrained in people’s lives and continues to evolve.
According to Forbes.com last May, “there are 2.5 quintillion bytes of data created each day at our current pace, but that pace is only accelerating with the growth of the Internet of Things (IoT). Over the last two years alone 90 percent of the data in the world was generated.”
While technology should make our lives easier, the information it provides can negatively impact our mental function by overwhelming us with too much input.
However, don’t confuse cognitive overload with work overload. Whereas work overload is simply having too much to do and not enough time to complete it, cognitive overload refers to having too much information to process at once.
Fouad ElNaggar, co-founder and CEO of Sapho, an employee experience software provider based in San Bruno, Calif., is passionate about cognitive overload. Together we developed some tips for workers on how to fix the problem.
1. Close/shut off distracting applications
The irony of productivity applications is that they can actually make you less productive. Microsoft Office includes Outlook, an email application, which can “helpfully” notify you when new email arrives.
Sadly, this can also contribute to your information overload: if you’re in the middle of a task and switch to Outlook to read an email, you might even forget about the task you were working on. Instant messaging apps, and frankly anything that dings or pops up an alert, are just as distracting. When trying to stay focused on a task, close or shut off any applications that could serve as potential distractions. Oh, and silence your phone, too.
2. Switch off push notifications
If you can’t close a potentially distracting application because you need it available, you can still quiet it down. Between Slack, Gchat, calendar, email and text messages, it probably seems like those tiny dialog boxes pop up on your screen all day long. Take a few minutes to evaluate which push notifications actually help you get work done, and turn off the rest.
3. Check email at set times

Constantly checking and responding to email is a major time drain. Set aside two times a day to answer emails, and do not check it at any other time. Put your phone on “Do Not Disturb,” and make it a point to not let notifications interrupt you during that time.
4. Stay off personal social media/news sites/other temptations
It’s easy and tempting to check social media, or your favorite news outlet while working, especially if you’re waiting for a task to finish before you proceed (such as rebooting a server or uploading a file). However, this just puts more data into your current memory banks, so to speak, so that instead of thinking about that server patching project now you’re also thinking about the NFL draft or how many people “like” your funny Facebook meme. Save social media for lunch time or after work. It’ll be more meaningful, and you can keep your work and recreation separate, as it should be.
5. Utilize minimalism
I keep a very minimalistic workspace: a family picture, a Rick Grimes (from “The Walking Dead,” which contains many parallels to IT life) keychain figure, and a calendar. No fancy furniture, no posters, no inspiring slogans, and no clutter. This helps me stay oriented to what I need to do without the sensory overload.
I also apply the same principles to my computer: I only keep programs running which I need, and even close unnecessary browser tabs, SSH sessions, and Windows Explorer windows so that I’m only concentrating on the task at hand.
6. Avoid multitasking

You may not always have a choice, but avoiding multitasking is one of the best things you can do to keep your brain from being overwhelmed. Dividing your attention into four or five parallel tasks is a sure-fire way to ensure that those tasks take longer or end up being completed less efficiently than if you accomplished them one at a time. Worse, it’s all too easy to drop tasks entirely as your attention shifts, resulting in uncompleted work.
7. Utilize documentation
Document your to-do lists, operational processes, and daily procedures you need to follow (building a new server, for instance) so that you don’t rely on memory and can quickly handle tasks—or better yet—refer them to someone else. Anytime I discover how something works or what I can improve upon I update the related electronic documentation so I don’t have to comb through old emails, leaf through handwritten notes, or worse, ask coworkers or fellow employees to fill in missing details that I should have recorded.
8. Take notes as you go
In addition to relying upon established documentation to make your efforts more productive, take notes during difficult operations such as a server recovery effort or network troubleshooting endeavor. It helps to serve as a “brain dump” of your activities so that you can purge them from memory and refer to this information later, if needed.
Believe me, there’s nothing more challenging than sorting through a complex series of tasks during an outage post-mortem to recall what you did to fix the problem. A written record can save your brain.
9. Take breaks

This should be a no-brainer, yet too many people consider themselves too busy to take a break, even though doing so allows you to step away from work and hit the “pause” button. It’s not just about relaxing your brain so that you return to work with a more productive mindset; a quick walk around the building can also give you space to think and come up with new ideas or solutions to problems you’re facing, eliminating one more source of information overload.
10. Avoid open space seating areas
I’ve written about some of the problems of the infamous (and unfortunately common) open-seating plan in companies. In a nutshell, having no privacy and sitting in close physical and auditory proximity even to individuals considered close friends strains working relationships and breeds frustration.
Avoiding cognitive overload isn’t just about not taking on or dealing with too much at once, but it’s also about not letting other people’s activities intrude upon your own productivity. Whether it’s an annoying personal phone call, playing music or even just chewing loudly, other people’s nearby activity can be a source of unwanted details, which reduces your capacity to do your job. You may not have a choice about sitting in an assigned open space seat, but take advantage of opportunities such as working from home, using an available conference room, or moving to an empty part of the office when you really need to focus.
11. Break projects down into chunks
Facing the entirety of a complex project is a daunting mission. It’s better and more effective to break a project down into subcomponents, and then focus on these separately, one at a time.
For instance, say you want to migrate users, computers, and services from one Active Directory domain to another. This would be overwhelming to focus on at once, so the best way to proceed is to divide the project into tasks. One task could be migrating user accounts and permissions. The next task could be migrating computer accounts, and the task after that could be addressing DNS changes, and so on. Plan it out in advance, and then tackle it piece-by-piece.
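The piece-by-piece approach can be sketched as an ordered task list that is executed one step at a time and halts at the first failure, so the project never proceeds on a broken foundation. The task names below are illustrative only, not a real migration runbook.

```python
# A hypothetical migration plan broken into sequential tasks, mirroring
# the Active Directory example above. Task names are made up.
migration_plan = [
    "Migrate user accounts and permissions",
    "Migrate computer accounts",
    "Update DNS records",
    "Migrate services and service accounts",
    "Decommission the old domain",
]

def run_plan(tasks, do_task):
    """Execute tasks one at a time; stop at the first failure and
    report what was completed and where the plan stalled."""
    completed = []
    for task in tasks:
        if not do_task(task):
            return completed, task
        completed.append(task)
    return completed, None

# Simulate a run in which the DNS step fails.
done, failed_at = run_plan(migration_plan, lambda t: t != "Update DNS records")
print(len(done), "tasks done; stalled at:", failed_at)
```

The payoff of chunking is visible in the return value: you always know exactly which subcomponents are finished and which single step needs attention next.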
12. Control your calendar
Don’t let colleagues fill in your day with meaningless meetings. Have a conversation with your coworkers about which meetings are absolutely necessary for you to participate in and skip the rest. If you are a manager or leader, encourage your employees to schedule in-person meetings only when they are absolutely necessary.
13. Don’t take your phone into your bedroom
You spend enough time on screens during the day. The simple act of charging your phone in another room gives you time to really disconnect. It also gives you a chance to wake up refreshed, and think about the day ahead before reactively reaching for your device and checking social media or email.
ElNaggar and I also thought of a couple of tips for business leaders on ways to reduce cognitive overload for their team. These tips include:
14. Invest in the right technology
Take the time to learn what processes or tools are pain points for your employees’ productivity. Research which solutions can automate certain tasks or limit daily distractions and implement them across your workforce.
15. Embrace employee-centric workflows
ElNaggar says that leaders “embrace the idea that employee experience matters, which will have a ripple effect in their organization.” He recommends that leaders start to develop more employee-centric workflows that reduce interruptions for their employees to help them focus on priorities and accomplish more work.
An example of an employee-centric workflow would be a business application or web portal, which gives employees a single, actionable view into all of their systems and breaks down complex processes into single-purpose, streamlined workflows, allowing employees to be more productive.
“Without leadership teams championing an employee-centric mindset, nothing will really change in the mid and lower levels of a company. Business leaders must start thinking about the impact their employees’ digital experience has on their work performance and overall satisfaction, and support the idea that investing in employee experience will drive employee engagement and productivity,” ElNaggar concluded.
There might be a better, knowledge management-based way to conduct the US Census, according to a group of university researchers.
Consider for a minute whether the best way to collect important data is to mail 125 million (or so) paper forms, often to “Current Occupant,” and to then follow up with humans carrying clipboards and ringing doorbells. You probably would conclude that it’s a lot of work and a process likely to result in the collection of incomplete or inaccurate data.
Then, you’ll update that data only every 10 years: Lots can change in 10 years. Yet, you will use the collected data to determine things like how your congressional representatives will be elected, how federal funds are allocated to local schools, even where new roads will be built and public transportation offered.
Is there a better way to do the US Census than how it has been done for 228 years?
A group of university researchers believes that the data gathered and analyzed by the US Census Bureau can be found in existing sources without sending any forms or people out into the field. Actually, the researchers argue that the government can collect much more data and more timely data using sources like tax returns, state websites, even Google search data.
“The costs of a census are pretty large, $17.5 billion. That’s based on these paper forms. That’s really the driver behind our research,” says Murray Jennex, a professor focused on knowledge management at San Diego State University. “The Census Bureau has spent a lot of money for technology to analyze data, but very little on collecting data,” he added during a recent interview.
Jennex was part of the team that included San Diego State professors James Kelly (lead author), Kaveh Abhari and Eric Frost, along with Alexandra Durcikova of the University of Oklahoma. Together, they authored a research paper titled, “Data in the Wild: A KM Approach to doing a Census Without Asking Anyone and the Issue of Privacy.” That paper will be presented in January at the Hawaii International Conference on System Sciences.
While the cost of paper census surveys — including the one scheduled for 2020 — is a key consideration in the team’s research, there are several other major factors.
One such consideration is the growing abundance of data in the public sphere, such as that collected by federal agencies (the Internal Revenue Service, the Department of Education, and the Department of Labor, for example), state and municipal agencies, and academic research organizations. Add in the trend data that can be gleaned from search engines such as Google, public utility records, and commercial data services such as the major consumer credit bureaus. Together they represent a wealth of data, highlighting how many people live where, the areas where poverty is most challenging, ethnic trends, and the need for elderly, healthcare, and educational support.
In addition, that data can be updated and analyzed in what Jennex calls “not quite real time.” “The data we would be using could be refreshed every year, and could be used to guide public policies,” he said.
Murray Jennex, San Diego State
The limiting factor, however, is privacy: how the Census Bureau could protect personally identifiable information (PII). Jennex notes that data can be anonymized by stripping off PII, which would be effective protection when the data analysis covers large areas, even five-digit ZIP codes. But it might not take a lot of work for someone to identify unique individuals or families at a neighborhood level, particularly those who stand out in the neighborhood by income, size of household, or ethnic background.
So, protections would have to be put in place.
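One common pair of protections is to strip direct identifiers from each record and then suppress statistics for groups too small to stay anonymous, a k-anonymity-style threshold. The sketch below uses made-up records and field names purely to illustrate the idea; it is not how the Census Bureau actually processes data.

```python
K = 3  # minimum group size before a count is published (assumed threshold)

# Invented records standing in for raw source data.
records = [
    {"name": "A. Smith", "ssn": "xxx", "zip": "92101", "income": 54000},
    {"name": "B. Jones", "ssn": "xxx", "zip": "92101", "income": 61000},
    {"name": "C. Lee",   "ssn": "xxx", "zip": "92101", "income": 58000},
    {"name": "D. Kim",   "ssn": "xxx", "zip": "92102", "income": 97000},
]

PII_FIELDS = {"name", "ssn"}

def anonymize(rows):
    """Strip direct identifiers from each record."""
    return [{k: v for k, v in r.items() if k not in PII_FIELDS} for r in rows]

def safe_counts(rows):
    """Publish per-ZIP counts only for groups of at least K people."""
    counts = {}
    for r in rows:
        counts[r["zip"]] = counts.get(r["zip"], 0) + 1
    return {z: n for z, n in counts.items() if n >= K}

print(safe_counts(anonymize(records)))  # → {'92101': 3}; the lone 92102 record is suppressed
```

The suppressed ZIP code illustrates Jennex’s point: stripping names is enough for large areas, but small groups that stand out in a neighborhood need an extra layer of protection.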
Another hurdle that the researchers acknowledge is that “government is actually very bad at sharing data.” For decades, government agencies have tended to keep their data siloed, despite attempts by some government leaders to move to an open data approach. Jennex cited the IRS as a particularly rich data source, not only for basic financial data but also for insight into household size, health issues, employment trends, and even transportation planning as more Americans work out of home offices.
Existing data, such as that from the IRS, can actually be more accurate than the data currently collected through census forms (known as the American Community Survey). In their paper the researchers cited how “household income” can be misleading, depending on whether household members are married or unrelated. Also, the income questions focus on what someone made in a single year, without accounting for the possibility that that year’s earnings were significantly higher or lower than what the person earns in a more typical year.
However, don’t expect the paper questionnaire to go away in the year and a half before you expect to find one in your mail. The changes that the researchers suggest are much further down the road.
Jim Connolly is a versatile and experienced technology journalist who has reported on IT trends for more than two decades. As Executive Managing Editor of InformationWeek, he oversees the day-to-day planning and editing on the site. Most recently he has been editor of UBM’s …
When hiring gets tough, IT leaders get strategic. Here’s how successful organizations seize the experts their competitors only wish they could land.
The technology industry’s unemployment rate is well below the national average, forcing companies to compete aggressively for top talent. When presented with a range of recruitment strategies by a recent Robert Half Technology questionnaire — including using recruiters, providing job flexibility and offering more pay — most IT decision makers said they are likely to try all approaches in order to land the best job candidates for their teams.
“We’re currently in a very competitive hiring market,” noted Ryan Sutton, district president for Robert Half Technology. “Employers want to hire the best talent to help keep their organization’s information safe, but so do a lot of other companies.”
Robert Half’s research finds that software development and data analytics experts are the most challenging to hire. Many other talents are scarce, too. “Some of the most in-demand skills right now include cloud security, security engineering, software engineering, DevOps, business intelligence and big data, as well as expertise in Java full-stack, ReactJS and AngularJS,” Sutton said.
Finding qualified job candidates typically requires using a combination of strategies. But it’s also important to be able to move quickly. “At the core of the labor market now is a demand for speed and efficiency in the hiring process, but don’t confuse an expeditious process with a hastily made decision,” Sutton warned. “Some smart options would be to work with a specialized recruiter who knows your local market well; increasing the pay and benefits package to better attract a top candidate; and losing some of the skills requirements on your job description that aren’t must-haves to widen your talent pool.” He also reminded hiring managers to not underestimate the power of networking. “Let your contacts know you’re looking to hire for a certain position.”
Look beyond the typical sources, suggested Art Langer, a professor and director of the Center for Technology Management at Columbia University and founder and chairman of Workforce Opportunity Services (WOS), a nonprofit organization that connects underserved and veteran populations with IT jobs. “There is a large pool of untapped talent from underserved communities that companies overlook,” he explained. Businesses are now competing in a global market. “New technology allows us to connect with colleagues and potential partners around the world as easily as with our neighbors,” Langer said. “Companies hoping to expand overseas can benefit from employees who speak multiple languages.”
Companies need to explore different models of employment if they want access to the best and the brightest job candidates, observed Nick Hamm, CEO of 10K Advisors, a Salesforce consulting firm. “Some of the most talented professionals are choosing to leave full-time employment to pursue freelancing careers or start their own small consulting companies as a way to gain more balance or reduce commute times,” he advised. “If companies want access to these individuals, they’ll need the right processes and mindset in place to incorporate contract employees into core teams.” Using a talent broker to find the right experts, vet them and apply them inside an organization to solve business problems can alleviate many of the challenges people may now have tapping into the gig economy, Hamm added.
John Samuel, CIO of Computer Generated Solutions, a business applications, enterprise learning and outsourcing services company, advised building some flexibility into job descriptions and requirements. “In this tight job market, a good way is to find candidates with the right attitude and a solid foundation and then train them in areas where they lack experience,” he said. Like Sutton, Samuel believes that many job descriptions are unrealistic, listing many requirements that aren’t core to the job’s role. “Rather than limiting your potential pool of candidates, simplify the job description to include your core requirements to entice applicants to fill open roles,” Samuel recommended.
Mike Weast, regional IT vice president at staffing firm Addison Group, urged hiring managers not to rely on software searches, no matter how intuitive they may seem, to uncover qualified job candidates. “There’s a lot of talk about using AI to find qualified candidates, but recruiters are needed to bridge the AI gap,” he claimed. “AI doesn’t qualify a candidate for showing up on time, having a strong handshake or making eye contact when communicating.”
Training current employees to meet the requirements of a vacant position is an often-overlooked method of acquiring experts. “It always makes sense to give existing employees the opportunity to expand their knowledge base and transition into vacant positions,” explained Lori Brock, head of innovation, Americas, for OSRAM, a multinational lighting manufacturer headquartered in Munich. “The roles within IT are merging with the traditional R&D functions as well as with roles in manufacturing, procurement, sales, marketing and more,” she added. “We can no longer consider jobs in IT fields as belonging to an IT silo within any organization.”
It’s important to pounce quickly when you find a skilled, qualified job candidate. “Now is certainly not the time to be slow to hire,” Sutton said. “It’s a candidate’s market and they are well aware of the opportunities available to them.”
John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, and RFID Journal.
Three jobs completely new to the IT industry will be data trash engineer, virtual identity defender, and voice UX designer, according to Cognizant.
With technology flooding the enterprise, many people fear that emerging tech will take over their jobs. However, technologies like artificial intelligence (AI) and machine learning will actually create more jobs for humans, according to a recent Cognizant report. The report outlines 21 “plausible and futuristic” jobs that will surface in the next decade.
The 21 jobs follow three major underlying themes: ethical behaviors, security and safety, and dreams, said the report. These themes come from humans’ deeper aspirations for the future of the enterprise and daily life. Humans want machines to be ethical; humans want to feel safe in a technologically fueled future; and humans have always dreamt of a futuristic world, which is now coming to fruition, according to the report.
Some of the jobs on Cognizant’s list could spark lifelong careers, and some positions might be more fleeting, said the report. Here are the 21 jobs of the future:
Cyber attack agent
Voice UX designer
Smart home design manager
Algorithm bias auditor
Virtual identity defender
Cyber calamity forecaster
Head of machine personality design
Data trash engineer
Head of business behavior
Juvenile cybercrime rehabilitation counselor
Esports arena builder
VR arcade manager
Vertical farm consultant
Machine risk officer
Flying car developer
Haptic interface programmer
Subscription management specialist
Chief purpose planner
Three of the positions would be completely new in the IT world: data trash engineer, virtual identity defender, and voice UX designer. According to the report, a data trash engineer would be responsible for using unused data in an organization to find hidden insights, a virtual identity defender would lead a team to make a company’s business goal a reality, and a voice UX designer would use diagnostic tools, algorithms, and more to create the perfect voice assistant.
Some skills are considered too “small” or specific to become a degree program and aren’t often listed on a student’s academic transcript. Yet, it’s a collection of these very skills that employers know are a big deal in the rapidly changing 21st century workforce.
This is where badges come in. These digital icons represent achievements or skills in a certain area or subject matter. A form of ‘micro-credentialing,’ badges allow students to break down their educational experience – competency by competency – and tell the complete story of their educational journey to potential employers.
Today, badges are a rising trend in the rapidly changing world of higher education. In fact, according to a 2016 survey by the University Professional and Continuing Education Association, one in five colleges has issued a digital badge.
“Employers continue to tell us that job candidates don’t have the skills they need,” Pestana said. “Employers are looking for people who not only have a deep knowledge of a specific subject matter, but also a wide array of other skills that allow them to work across a variety of other subject areas.”
In an attempt to begin to close this gap and give students from all majors and disciplines the opportunity to build the skills that matter most in the 21st century – and still graduate in four years – Pestana and Fonseca began working on building a badge program at FIU.
They started with a subject area that has major implications for all industries and sectors: cybersecurity.
“Hospitality, healthcare, government, law, business – there isn’t an industry that isn’t susceptible to cyberattacks,” Pestana said. “These badges give the basic knowledge everyone needs to know, because anyone can be targeted by a cyberattack and have their personal information compromised.”
Collaborating across the university, Pestana and Fonseca brought in expertise from FIU’s Division of Information Technology, College of Business, College of Engineering & Computing, College of Law and StartUp FIU to create six badges. They are focused on different areas related to cybersecurity, including the Internet of Things, blockchain, cryptocurrencies and cybersecurity policy and law.
To earn a badge, students attend a Saturday workshop, which includes a lecture and active learning exercise. If students earn all six badges, they will also earn a certificate in cybersecurity fundamentals.
Cybersecurity was a natural place to begin offering badges.
The cybersecurity badges are just the beginning of a broader initiative to bring more 21st century workforce competencies to FIU.
A special interdisciplinary committee led by Senior Vice President for Academic and Student Affairs Elizabeth Bejar – and which includes members from academic and student-services units across the institution – will be working closely with local industry partners to explore bringing new badge programs to the university.
“FIU is always looking toward the future – that’s who we are,” Bejar said. “We’re here to educate lifelong learners and ensure they have the relevant, just-in-time skills that put them at a competitive advantage in our 21st century workforce.”
No degree means diminished opportunities, study finds
Millions of Californians who began their college education but never finished deserve special support and policy changes to help get them across the finish line later in life, a new report urges.
The study from the non-partisan California Competes organization estimates that 4 million Californians, ages 25 to 64, earned some college credits at various times but no associate or bachelor’s degrees and are not in school now. As a result, their employment and financial prospects have suffered and they face “diminishing opportunities in labor markets that increasingly rely on workers with degrees,” said the report entitled “Back to College: California’s Imperative to Re-Engage Adults.”
The report found that those adults with some college but no degree are significantly less likely to earn more than $75,000 a year compared to those who have at least an associate degree from a community college. Only 14 percent of those who didn’t finish their degrees earn in that upper income bracket, compared to 36 percent of those who have degrees (and 5 percent of those with just high school or less).
Not surprisingly, fewer of those adults who have some college credits but no diploma own homes and have full health insurance compared to graduates. And other research shows that non-completers have higher default rates on college loans, with unhappy consequences.
“We’ve already invested in folks who haven’t crossed the finish line. So our argument is that it makes sense to help them get across the finish line to benefit the broader California economy and to boost their individual prosperity,” Lande Ajose, executive director of California Competes, said in an interview. That Oakland-based organization analyzes ways to improve higher education in the state and how such reforms can aid the economy. Ajose is also chairwoman of the California Student Aid Commission, which administers Cal Grants.
The study showed ethnic disparities for college completion among California adults between ages 25 and 64, with higher rates for whites and Asians than for Latinos and blacks. Sixty-two percent of Asians of that age had earned a degree, compared to 53 percent of whites, 34 percent of blacks and 18 percent of Latinos.
However, black adults showed the highest rate (28 percent) of their ethnic group who started but did not finish a degree, followed by whites (23 percent), Latinos (17 percent) and Asians (13 percent).
Among the roadblocks facing adults who want to return to college are limitations on financial aid that don’t affect most traditional age students, noted the report.
For example, federal Pell Grants are available for only 12 semesters over a person’s life and many of these adults are likely to have already used that allotment up years ago. Because of qualification rules and limits on expenditures, state-funded Cal Grants are very difficult to obtain for people who are older than 28 and several years out of high school. State officials are looking at ways to improve Cal Grants, including making them more available to people who attend community college years after high school.
“The inadequate financial aid options available to returning adults exacerbate the economic trends” that hurt the earning potential of people without degrees, the report said. In addition those people face personal and scheduling problems juggling work and family issues with their studies if they want to complete their degrees.
In addition, the report described poor coordination among California’s higher education systems and resulting “structural barriers that impede adults’ abilities to return to school.” Those include difficult access to academic transcripts and older data among different colleges and universities if an adult started at one or two campuses and seeks to finish at another, it said.
While describing problems, the report does not offer specific suggestions for improvements. California Competes officials said they expect a second report to do so by year’s end.
Adults without college degrees or certificates are at the center of a much-discussed effort in California. State leaders hope that the opening of a new online community college late next year will offer training and extra education for skilled jobs in fast-growing industries. Those credentials are intended mainly to be completed in a year or less.
However, most adults who want to finish the more traditional associate or bachelor’s degrees still must attend the state’s other 114 community colleges or a four-year university. Adult students currently can take some online courses offered at those schools.
However, Ajose said college campuses should make their class schedules and other services more flexible to serve older students.
Meanwhile, a separate new report shows that students who took out federal student loans for college but never finished degrees default at high rates and face many problems as a result. Twenty-three percent of borrowers who started college in 2003-04 defaulted within 12 years compared to 11 percent of those who completed, according to a policy brief by The Institute for College Access and Success (TICAS).
Defaulters face “stark and immediate consequences” that could include fines, wage garnishment, lost job opportunities and suspended driver’s and professional licenses, said the report entitled “The Self-Defeating Consequences of Student Loan Default.” TICAS, a non-partisan research and policy group with offices in Oakland and Washington, D.C., called for reforms that would lift some of the most burdensome penalties and make it easier to enroll in income-based repayment plans.
Online versions of college courses are attracting hundreds of thousands of students, millions of dollars in funding, and accolades from university administrators. Is this a fad, or is higher education about to get the overhaul it needs?
A hundred years ago, higher education seemed on the verge of a technological revolution. The spread of a powerful new communication network—the modern postal system—had made it possible for universities to distribute their lessons beyond the bounds of their campuses. Anyone with a mailbox could enroll in a class. Frederick Jackson Turner, the famed University of Wisconsin historian, wrote that the “machinery” of distance learning would carry “irrigating streams of education into the arid regions” of the country. Sensing a historic opportunity to reach new students and garner new revenues, schools rushed to set up correspondence divisions. By the 1920s, postal courses had become a full-blown mania. Four times as many people were taking them as were enrolled in all the nation’s colleges and universities combined.
The hopes for this early form of distance learning went well beyond broader access. Many educators believed that correspondence courses would be better than traditional on-campus instruction because assignments and assessments could be tailored specifically to each student. The University of Chicago’s Home-Study Department, one of the nation’s largest, told prospective enrollees that they would “receive individual personal attention,” delivered “according to any personal schedule and in any place where postal service is available.” The department’s director claimed that correspondence study offered students an intimate “tutorial relationship” that “takes into account individual differences in learning.” The education, he said, would prove superior to that delivered in “the crowded classroom of the ordinary American University.”
We’ve been hearing strikingly similar claims today. Another powerful communication network—the Internet—is again raising hopes of a revolution in higher education. This fall, many of the country’s leading universities, including MIT, Harvard, Stanford, and Princeton, are offering free classes over the Net, and more than a million people around the world have signed up to take them. These “massive open online courses,” or MOOCs, are earning praise for bringing outstanding college teaching to multitudes of students who otherwise wouldn’t have access to it, including those in remote places and those in the middle of their careers. The online classes are also being promoted as a way to bolster the quality and productivity of teaching in general—for students on campus as well as off. Former U.S. secretary of education William Bennett has written that he senses “an Athens-like renaissance” in the making. Stanford president John Hennessy told the New Yorker he sees “a tsunami coming.”
The excitement over MOOCs comes at a time of growing dissatisfaction with the state of college education. The average price tag for a bachelor’s degree has shot up to more than $100,000. Spending four years on campus often leaves young people or their parents weighed down with big debts, a burden not only on their personal finances but on the overall economy. And many people worry that even as the cost of higher education has risen, its quality has fallen. Dropout rates are often high, particularly at public colleges, and many graduates display little evidence that college improved their critical-thinking skills. Close to 60 percent of Americans believe that the country’s colleges and universities are failing to provide students with “good value for the money they and their families spend,” according to a 2011 survey by the Pew Research Center. Proponents of MOOCs say the efficiency and flexibility of online instruction will offer a timely remedy.
But not everyone is enthusiastic. The online classes, some educators fear, will at best prove a distraction to college administrators; at worst, they will end up diminishing the quality of on-campus education. Critics point to the earlier correspondence-course mania as a cautionary tale. Even as universities rushed to expand their home-study programs in the 1920s, investigations revealed that the quality of the instruction fell short of the levels promised and that only a tiny fraction of enrollees actually completed the courses. In a lecture at Oxford in 1928, the eminent American educator Abraham Flexner delivered a withering indictment of correspondence study, claiming that it promoted “participation” at the expense of educational rigor. By the 1930s, once-eager faculty and administrators had lost interest in teaching by mail. The craze fizzled.
Is it different this time? Has technology at last advanced to the point where the revolutionary promise of distance learning can be fulfilled? We don’t yet know; the fervor surrounding MOOCs makes it easy to forget that they’re still in their infancy. But even at this early juncture, the strengths and weaknesses of this radically new form of education are coming into focus.
Rise of the MOOCs
“I had no clue what I was doing,” Sebastian Thrun says with a chuckle, as he recalls his decision last year to offer Stanford’s Introduction to Artificial Intelligence course free online. The 45-year-old robotics expert had a hunch that the class, which typically enrolls a couple of hundred undergraduates, would prove a draw on the Net. After all, he and his co-professor, Peter Norvig, were both Silicon Valley stars, holding top research posts at Google in addition to teaching at Stanford. But while Thrun imagined that enrollment might reach 10,000 students, the actual number turned out to be more than an order of magnitude higher. When the class began, in October 2011, some 160,000 people had signed up.
The experience changed Thrun’s life. Declaring “I can’t teach at Stanford again,” he announced in January that he was joining two other roboticists to launch an ambitious educational startup called Udacity. The venture, which bills itself as a “21st-century university,” is paying professors from such schools as Rutgers and the University of Virginia to give open courses on the Net, using the technology originally developed for the AI class. Most of the 14 classes Udacity offers fall into the domains of computer science and mathematics, and Thrun says it will concentrate on such fields for now. But his ambitions are hardly narrow: he sees the traditional university degree as an outdated artifact and believes Udacity will provide a new form of lifelong education better suited to the modern labor market.
Udacity is just one of several companies looking to capitalize on the burgeoning enthusiasm for MOOCs. In April, two of Thrun’s colleagues in Stanford’s computer science department, Daphne Koller and Andrew Ng, rolled out a similar startup called Coursera. Like Udacity, Coursera is a for-profit business backed with millions of dollars in venture capital. Unlike Udacity, Coursera is working in concert with big universities. Where Thrun wants to develop an alternative to a traditional university, Koller and Ng are looking to build a system that established schools can use to deliver their own classes over the Net. Coursera’s original partners included not only Stanford but Princeton, Penn, and the University of Michigan, and this summer the company announced affiliations with 29 more schools. It already has about 200 classes on offer, in fields ranging from statistics to sociology.
On the other side of the country, MIT and Harvard joined forces in May to form edX, a nonprofit that is also offering tuition-free online classes to all comers. Bankrolled with $30 million from each school, edX is using an open-source teaching platform developed at MIT. It includes video lessons and discussion forums similar to those offered by its for-profit rivals, but it also incorporates virtual laboratories where students can carry out simulated experiments. This past summer, the University of California at Berkeley joined edX, and in September the program debuted its first seven classes, mainly in math and engineering. Overseeing the launch of edX is Anant Agarwal, the former director of MIT’s Computer Science and Artificial Intelligence Laboratory.
The leaders of Udacity, Coursera, and edX have not limited their aspirations to enhancing distance learning. They believe that online instruction will become a cornerstone of the college experience for on-campus students as well. The merging of virtual classrooms with real classrooms, they say, will propel academia forward. “We are reinventing education,” declares Agarwal. “This will change the world.”
Online courses aren’t new; big commercial outfits like the University of Phoenix and DeVry University offer thousands of them, and many public colleges allow students to take classes on the Net for credit. So what makes MOOCs different? As Thrun sees it, the secret lies in “student engagement.” Up to now, most Internet classes have consisted largely of videotaped lectures, a format that Thrun sees as deeply flawed. Classroom lectures are in general “boring,” he says, and taped lectures are even less engaging: “You get the worst part without getting the best part.” While MOOCs include videos of professors explaining concepts and scribbling on whiteboards, the talks are typically broken up into brief segments, punctuated by on-screen exercises and quizzes. Peppering students with questions keeps them involved with the lesson, Thrun argues, while providing the kind of reinforcement that has been shown to strengthen comprehension and retention.
Norvig, who earlier this year taught a Udacity class on computer programming, points to another difference between MOOCs and their predecessors. The economics of online education, he says, have improved dramatically. Cloud computing facilities allow vast amounts of data to be stored and transmitted at very low cost. Lessons and quizzes can be streamed free over YouTube and other popular media delivery services. And social networks like Facebook provide models for digital campuses where students can form study groups and answer each other’s questions. In just the last few years, the cost of delivering interactive multimedia classes online has dropped precipitously. That’s made it possible to teach huge numbers of students without charging them tuition.
It’s hardly a coincidence that Udacity, Coursera, and edX are all led by computer scientists. To fulfill their grand promise—making college at once cheaper and better—MOOCs will need to exploit the latest breakthroughs in large-scale data processing and machine learning, which enable computers to adjust to the tasks at hand. Delivering a complex class to thousands of people simultaneously demands a high degree of automation. Many of the labor-intensive tasks traditionally performed by professors and teaching assistants—grading tests, tutoring, moderating discussions—have to be done by computers. Advanced analytical software is also required to parse the enormous amounts of information about student behavior collected during the classes. By using algorithms to spot patterns in the data, programmers hope to gain insights into learning styles and teaching strategies, which can then be used to refine the technology further. Such artificial-intelligence techniques will, the MOOC pioneers believe, bring higher education out of the industrial era and into the digital age.
While their ambitions are vast, Thrun, Koller, and Agarwal all stress that their fledgling organizations are just starting to amass information from their courses and analyze it. “We haven’t yet used the data in a systematic way,” says Thrun. It will be some time before the companies are able to turn the information they’re collecting into valuable new features for professors and students. To see the cutting edge in computerized teaching today, you have to look elsewhere—in particular, to a small group of academic testing and tutoring outfits that are hard at work translating pedagogical theories into software code.
One of the foremost thinkers in this field is a soft-spoken New Yorker named David Kuntz. In 1994, after earning his master’s degree in philosophy and working as an epistemologist, or knowledge theorist, for the Law School Admission Council (the organization that administers the LSAT examinations), Kuntz joined the Educational Testing Service, which runs the SAT college-admission tests. ETS was eager to use the burgeoning power of computers to design more precise exams and grade them more efficiently. It set Kuntz and other philosophers to work on a very big question: how do you use software to measure meaning, promote learning, and evaluate understanding? The question became even more pressing when the World Wide Web opened the Internet to the masses. Interest in “e-learning” surged, and the effort to develop sophisticated teaching and testing software combined with the effort to design compelling educational websites.
Three years ago, Kuntz joined a small Manhattan startup called Knewton as its head of research. The company specializes in the budding discipline of adaptive learning. Like other trailblazers in instructional software, including the University of California-Irvine spinoff ALEKS, Carnegie Mellon’s Open Learning Initiative, and the much celebrated Khan Academy, it is developing online tutoring systems that can adapt to the needs and learning styles of individual students as they proceed through a course of instruction. Such programs, says Kuntz, “get better as more data is collected.” Software for, say, teaching algebra can be written to reflect alternative theories of learning, and then, as many students proceed through the program, the theories can be tested and refined and the software improved. The bigger the data sets, the more adept the systems become at providing each student with the right information in the right form at the right moment.
Knewton has introduced a remedial math course for incoming college students, and its technology is being incorporated into tutoring programs offered by the textbook giant Pearson. But Kuntz believes that we’re only just beginning to see the potential of educational software. Through the intensive use of data analysis and machine learning techniques, he predicts, the programs will advance through several “tiers of adaptivity,” each offering greater personalization through more advanced automation. In the initial tier, which is already largely in place, the sequence of steps a student takes through a course depends on that student’s choices and responses. Answers to a set of questions may, for example, trigger further instruction in a concept that has yet to be mastered—or propel the student forward by introducing material on a new topic. “Each student,” explains Kuntz, “takes a different path.”

In the next tier, which Knewton plans to reach soon, the mode in which material is presented adapts automatically to each student. Although the link between media and learning remains controversial, many educators believe that different students learn in different ways. Some learn best by reading text, others by watching a demonstration, others by playing a game, and still others by engaging in a dialogue. A student’s ideal mode may change, moreover, at each stage in a course—or even at different times during the day. A video lecture may be best for one lesson, while a written exercise may be best for the next. By monitoring how students interact with the teaching system itself—when they speed up, when they slow down, where they click—a computer can learn to anticipate their needs and deliver material in whatever medium promises to maximize their comprehension and retention.
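The first tier Kuntz describes, where the sequence of steps depends on a student's answers, can be illustrated with a toy sketch. This is not Knewton's actual system or API; the class name, mastery estimate, and threshold below are all hypothetical simplifications of the idea that unmastered concepts trigger further instruction while mastered ones let the student advance.

```python
class AdaptiveTutor:
    """Toy sketch of the initial 'tier of adaptivity': the next step
    depends on the student's responses, not a fixed sequence.
    (Hypothetical illustration, not Knewton's real implementation.)"""

    def __init__(self, topics, mastery_threshold=0.8):
        self.topics = list(topics)
        self.threshold = mastery_threshold
        # Running tally of answers per topic, used as a crude mastery estimate.
        self.correct = {t: 0 for t in topics}
        self.attempts = {t: 0 for t in topics}

    def record_answer(self, topic, is_correct):
        self.attempts[topic] += 1
        if is_correct:
            self.correct[topic] += 1

    def mastery(self, topic):
        # Fraction of correct answers so far; 0.0 before any attempts.
        if self.attempts[topic] == 0:
            return 0.0
        return self.correct[topic] / self.attempts[topic]

    def next_topic(self):
        # Revisit the earliest topic not yet mastered; advance to new
        # material only once prior topics clear the threshold.
        for t in self.topics:
            if self.mastery(t) < self.threshold:
                return t
        return None  # every topic mastered: course complete
```

A real system would replace the running average with a statistical learner model refined across many students, which is exactly the "bigger data sets" effect Kuntz describes.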
Looking toward the future, Kuntz says that computers will ultimately be able to tailor an entire “learning environment” to fit each student. Elements of the program’s interface, for example, will change as the computer senses the student’s optimum style of learning.
Big Data on Campus
The advances in tutoring programs promise to help many college, high-school, and even elementary students master basic concepts. One-on-one instruction has long been known to provide substantial educational benefits, but its high cost has constrained its use, particularly in public schools. It’s likely that if computers are used in place of teachers, many more students will be able to enjoy the benefits of tutoring. According to one recent study of undergraduates taking statistics courses at public universities, the latest of the online tutoring systems seem to produce roughly the same results as face-to-face instruction.
While MOOCs are incorporating adaptive learning routines into their software, their ambitions for data mining go well beyond tutoring. Thrun says that we’ve only seen “the tip of the iceberg.” What particularly excites him and other computer scientists about free online classes is that thanks to their unprecedented scale, they can generate the immense quantities of data required for effective machine learning. Koller says that Coursera has set up its system with intensive data collection and analysis in mind. Every variable in a course is tracked. When a student pauses a video or increases its playback speed, that choice is captured in the Coursera database. The same thing happens when a student answers a quiz question, revises an assignment, or comments in a forum. Every action, no matter how inconsequential it may seem, becomes grist for the statistical mill.
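The fine-grained capture Koller describes — every pause, playback-speed change, and quiz answer landing in a database — amounts to an append-only log of timestamped action records. A minimal sketch, with invented event names and fields (Coursera’s actual schema is not public):

```python
import json
import time

# Minimal clickstream logger: every student action becomes one timestamped
# record in an append-only log. Event names and fields are invented.

class EventLog:
    def __init__(self):
        self.records = []

    def track(self, student_id: str, event: str, **details):
        self.records.append({
            "student": student_id,
            "event": event,
            "ts": time.time(),
            "details": details,
        })

log = EventLog()
log.track("s42", "video_pause", lecture="week1", position_sec=312)
log.track("s42", "playback_speed", lecture="week1", speed=1.5)
log.track("s42", "quiz_answer", quiz="q3", choice="B", correct=False)

# Each record serializes cleanly for later statistical analysis.
wire = json.dumps(log.records)
```

The point of logging at this granularity is exactly what the article says: no single record matters, but millions of them together become training data for the machine-learning models.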
Assembling information on student behavior at such a minute level of detail, says Koller, “opens new avenues for understanding learning.” Previously hidden patterns in the way students navigate and master complex subject matter can be brought to light.
The number-crunching also promises to benefit teachers and students directly, she adds. Professors will receive regular reports on what’s working in their classes and what’s not. And by pinpointing “the most predictive factors for success,” MOOC software will eventually be able to guide each student onto “the right trajectory.” Koller says she hopes that Lake Wobegon, the mythical town in which “all students are above average,” will “come to life.”
MIT and Harvard are designing edX to be as much a tool for educational research as a digital teaching platform, Anant Agarwal says. Scholars are already beginning to use data from the system to test hypotheses about how people learn, and as the portfolio of courses grows, the opportunities for research will proliferate. Beyond generating pedagogical insights, Agarwal foresees many other practical applications for the edX data bank. Machine learning may, for instance, pave the way for an automated system to detect cheating in online classes, a challenge that is becoming more pressing as universities consider granting certificates or even credits to students who complete MOOCs.
With a data explosion seemingly imminent, it’s hard not to get caught up in the enthusiasm of the MOOC architects. Even though their work centers on computers, their goals are deeply humanistic. They’re looking to use machine learning to foster student learning, to deploy artificial intelligence in the service of human intelligence. But the enthusiasm should be tempered by skepticism. The benefits of machine learning in education remain largely theoretical. And even if AI techniques generate genuine advances in pedagogy, those breakthroughs may have limited application. It’s one thing for programmers to automate courses of instruction when a body of knowledge can be defined explicitly and a student’s progress measured precisely. It’s a very different thing to try to replicate on a computer screen the intricate and sometimes ineffable experiences of teaching and learning that take place on a college campus.
The promoters of MOOCs have a “fairly naïve perception of what the analysis of large data sets allows,” says Timothy Burke, a history professor at Swarthmore College. He contends that distance education has historically fallen short of expectations not for technical reasons but, rather, because of “deep philosophical problems” with the model. He grants that online education may provide efficient training in computer programming and other fields characterized by well-established procedures that can be codified in software. But he argues that the essence of a college education lies in the subtle interplay between students and teachers that cannot be simulated by machines, no matter how sophisticated the programming.
Alan Jacobs, a professor of English at Wheaton College in Illinois, raises similar concerns. In an e-mail to me, he observed that the work of college students “can be affected in dramatic ways by their reflection on the rhetorical situations they encounter in the classroom, in real-time synchronous encounters with other people.” The full richness of such conversations can’t be replicated in Internet forums, he argued, “unless the people writing online have a skilled novelist’s ability to represent complex modes of thought and experience in prose.” A computer screen will never be more than a shadow of a good college classroom. Like Burke, Jacobs worries that the view of education reflected in MOOCs has been skewed toward that of the computer scientists developing the platforms.
Flipping the Classroom
The designers and promoters of MOOCs don’t suggest that computers will make classrooms obsolete. But they do argue that online instruction will change the nature of teaching on campus, making it more engaging and efficient. The traditional model of instruction, where students go to class to listen to lectures and then head off on their own to complete assignments, will be inverted. Students will listen to lectures and review other explanatory material alone on their computers (as some middle-school and high-school students already do with Khan Academy videos), and then they’ll gather in classrooms to explore the subject matter more deeply—through discussions with professors, say, or through lab exercises. In theory, this “flipped classroom” will allocate teaching time more rationally, enriching the experience of both professor and student.
Here, too, there are doubts. One cause for concern is the high dropout rate that has plagued the early MOOCs. Of the 160,000 people who enrolled in Norvig and Thrun’s AI class, only about 14 percent ended up completing it. Of the 155,000 students who signed up for an MIT course on electronic circuits earlier this year, only 23,000 bothered to finish the first problem set. About 7,000, or 5 percent, passed the course. Shepherding thousands of students through a college class is a remarkable achievement by any measure—typically only about 175 MIT students finish the circuits course each year—but the dropout rate highlights the difficulty of keeping online students attentive and motivated. Norvig acknowledges that the initial enrollees in MOOCs have been an especially self-motivated group. The real test, particularly for on-campus use of online instruction, will come when a broader and more typical cohort takes the classes. MOOCs will have to inspire a wide variety of students and retain their interest as they sit in front of their computers through weeks of study.
The greatest fear among the critics of MOOCs is that colleges will rush to incorporate online instruction into traditional classes without carefully evaluating the possible drawbacks. Last fall, shortly before he cofounded Coursera, Andrew Ng adapted his Stanford course on machine learning so that online students could participate, and thousands enrolled. But at least one on-campus student found the class wanting. Writing on his blog, computer science major Ben Rudolph complained that the “academic rigor” fell short of Stanford’s standards. He felt that the computerized assignments, by providing automated, immediate hints and guidance, failed to encourage “critical thinking.” He also reported a sense of isolation. He “met barely anyone in [the] class,” he said, because “everything was done alone in my room.” Ng has staunchly defended the format of the class, but the fact is that no one really knows how an increasing stress on computerized instruction will alter the dynamics of college life.
The leaders of the MOOC movement acknowledge the challenges they face. Perfecting the model, says Agarwal, will require “sophisticated inventions” in many areas, from grading essays to granting credentials. This will only get harder as the online courses expand further into the open-ended, exploratory realms of the liberal arts, where knowledge is rarely easy to codify and the success of a class can hinge on a professor’s ability to guide students toward unexpected insights. The outcome of this year’s crop of MOOCs should tell us a lot more about the value of the classes and the role they’ll ultimately play in the educational system.
At least as daunting as the technical challenges will be the existential questions that online instruction raises for universities. Whether massive open courses live up to their hype or not, they will force college administrators and professors to reconsider many of their assumptions about the form and meaning of teaching. For better or worse, the Net’s disruptive forces have arrived at the gates of academia.
Nicholas Carr is the author of The Shallows: What the Internet Is Doing to Our Brains. His last article for MIT Technology Review was “The Library of Utopia.”
Amazon recently proved it isn’t infallible when it shut down a human resources system that was systematically biased against women. However, there’s more to the story that today’s enterprise leaders should know.
When people talk about masters of machine learning, Amazon is always top of mind. For more than two decades, the company’s recommendation capabilities have been coveted by others hoping to imitate them. However, even Amazon hasn’t mastered machine learning completely, as evidenced by a biased HR system it shut down. What may be surprising to some is the underlying reality of the situation: biased data isn’t just a technical problem, it’s a business problem.
Specifically, Reuters and others recently reported that since 2014 Amazon had been using a recruiting engine that was systematically biased against women seeking technical positions. It doesn’t necessarily follow that Amazon is biased against tech-savvy women, but the situation does seem to indicate that the historical data used to train the system included more males than females.
Historically, more men than women have held technical positions, generally speaking, not just at Amazon. The world’s population is roughly half men and half women, with one sex more predominant in some cultures than others. Yet women hold only 26% of “professional computing occupations.” If the dataset shows that roughly three out of four workers in technical positions are men, then it follows that an AI trained on that data will reflect the same imbalance.
Amazon now faces a public relations fiasco even though it abandoned the system. According to a spokesperson, the engine “was never used by Amazon recruiters to evaluate candidates”: it ran only in a trial phase, never on its own, and was never rolled out to a larger group. The project was abandoned a couple of years ago for many reasons, including that it never returned strong candidates for a role. Interestingly, the company claims that bias wasn’t the issue.
If bias isn’t the issue, then what is?
There’s no doubt that the outcome of Amazon’s HR system was biased. Biased data produces biased outcomes. However, there is another important issue, not identified by Amazon or some other media: data quality.
For years, organizations have been hearing about the need for good-quality data. For one thing, good-quality data is more reliable than bad-quality data. Just about every business wants to use analytics to make better business decisions, but not everyone is thinking about the quality of the data that is being relied upon to make such decisions. Data is also used to train AI systems, so the quality of that data should be top-of-mind. Sadly, in an HR context, bad data is the norm.
“If they’d asked us, I would have said starting with resumes is a bad idea,” said Kevin Parker, CEO of hiring intelligence company HireVue. “It will never work, particularly when you’re looking at resumes for training data.”
As if the poor quality of resume data weren’t enough to derail Amazon’s project, add job descriptions. Job descriptions are often poorly written, so the likely result is a system that attempts to match attributes from one pool of poor-quality data with another.
Bias is a huge issue, regardless
Humans tend to be naturally biased creatures. Since humans create the data, it stands to reason that their biases will be reflected in it. While there are ways of correcting for bias, it isn’t as simple as pressing a button. One must be able to identify the bias in the first place, and should also understand the context of that bias.
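Identifying bias starts with measuring it. One common first check in hiring contexts is the “four-fifths rule” used by US regulators: a selection rate for any group below 80 percent of the highest group’s rate flags possible adverse impact. A minimal sketch, with invented candidate counts:

```python
# Minimal adverse-impact check (the US EEOC "four-fifths rule"): flag any
# group whose selection rate falls below 80% of the best-off group's rate.
# The candidate counts below are invented for illustration.

def adverse_impact(selected: dict, applied: dict, threshold: float = 0.8) -> dict:
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

flags = adverse_impact(
    selected={"men": 60, "women": 20},
    applied={"men": 100, "women": 100},
)
# men hired at 60%, women at 20%: the 0.33 ratio is well under 0.8,
# so the "women" group is flagged.
```

A check like this only surfaces the disparity; deciding why it exists, and what to do about it, is the contextual work the paragraph above describes.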
“We think of resumes as a representation of the person, but let’s go to the person and get to the root of what we’re trying to do, and try to figure out if the person is a great match for this particular job. Are they empathetic? Are they great problem solvers? Are they great analytical thinkers? All of the things that define success in a job or role,” said HireVue’s Parker.
HireVue is building its own AI models that are correlated to performance in customer organizations.
“[The models are] validated. We do a lot of work to eliminate bias in the training data and we can prove it arithmetically,” said Parker. “The underlying flaw is don’t start with resumes because it won’t end well.”
HireVue looks at the data collected during a 20- to 30-minute video interview, from which it can extract tens of thousands of data points. Its system is purportedly capable of showing an arithmetic before-and-after: if all the successful people in a particular role have been middle-aged white men, but the same level of success is desired from a more diverse workforce, what underlying competencies and work-related skills should the company be seeking?
“By understanding the attributes of the best, middle and poor performers in an organization, an AI model can be built [that looks] for those attributes in a video interview so you can know almost in real-time if a candidate is a good candidate or not and respond to each in a different way,” said Parker.
Recruitment software and marketplace ScoutExchange analyzes the track record of individual recruiters to identify the types of biases they’ve exhibited over time, such as whether they hired more men than women or whether they tend to prefer candidates from certain colleges or universities over others.
“There’s bias in all data and you need a strategy to deal with it or you’re going to end up with results you don’t like and you won’t use [the system],” said Ken Lazarus, CEO of ScoutExchange. “The people at Amazon are pretty smart and pretty good at machine learning and recommendations, but it points out the real difficulty of trying to match humans without any track record. We look at a recruiter’s track record so we can remove bias. Everyone needs a strategy to do that or you’re not going to get anywhere.”
The three things to take away from Amazon’s situation are these:
1 – Despite all the hype about machine learning, it isn’t perfect. Even Amazon doesn’t get everything right all the time. No organization or individual does.
2 – Bias isn’t the sole domain of statisticians and data scientists. Business and IT leaders need to be concerned about it because bias can have very real business impacts, as Amazon’s gaffe demonstrates.
3 – Data quality matters. Data quality is not considered as hot a topic as AI, but the two go hand-in-hand. Data is AI brain food.
Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit.