Friday, May 22, 2015

Syria and Iraq ... Islamic State.

One day this April, instead of coming home from school, two teenagers left their valley high in the Caucasus, and went off to war.

In Minneapolis, Minnesota, a 20-year-old stole her friend's passport to make the same hazardous journey.

From New Zealand, came a former security guard; from Canada, a hockey fan who loved to fish and hunt.

And there have been many, many more: between 16,000 and 17,000, according to one independent Western estimate, men and a small number of women from 90 countries or more who have streamed to Syria and Iraq to wage Muslim holy war for the Islamic State.

Abu Bakr Al-Baghdadi, the group's leader, has appealed to Muslims throughout the world to move to lands under its control — to fight, but also to work as administrators, doctors, judges, engineers and scholars, and to marry, put down roots and start families.

"Every person can contribute something to the Islamic State," a Canadian enlistee in Islamic State, Andre Poulin, says in a videotaped statement that has been used for online recruitment. "You can easily earn yourself a higher station with God almighty for the next life by sacrificing just a small bit of this worldly life."

The contingent of foreigners who have taken up arms on behalf of Islamic State during the past 3 1/2 years is more than twice as big as the French Foreign Legion. The conflict in Syria and Iraq has now drawn more volunteer fighters than past Islamist causes in Afghanistan and the former Yugoslavia — and an estimated eight out of 10 enlistees have joined Islamic State.

They have been there for defeats and victories. Following major losses in both Syria and Iraq, the fighters of Islamic State appear to have gotten a second wind in recent days, capturing Ramadi, capital of Iraq's largest Sunni province, and the ancient city of Palmyra, famous for its 2,000-year-old ruins.

There are battle-hardened Bosnians and Chechens, prized for their experience and elan under fire. There are religious zealots untested in combat but eager to die for their faith.

They include around 3,300 Western Europeans and 100 or so Americans, according to the International Center for the Study of Radicalization, a think tank at King's College London.

Ten to 15 percent of the enlistees are believed to have died in action. Hundreds of others have survived and gone home; their governments now worry about the consequences.

"We all share the concern that fighters will attempt to return to their home countries or regions, and look to participate in or support terrorism and the radicalization to violence," Nicholas J. Rasmussen, director of the U.S. government's National Counterterrorism Center, told a Senate hearing earlier this year.

"Just like Osama bin Laden started his career in international terrorism as a foreign fighter in Afghanistan in the 1980s, the next generation of Osama bin Ladens are currently starting theirs in Syria and Iraq," ICSR director Peter Neumann told a White House summit on combating extremist violence in February.

One problem in choking off the flow of recruits has been the variety of their profiles and motives.

Associated Press reporters on five continents tracked some of those who have left to join Islamic State, and found people born into the Islamic faith as well as converts, adventurers, educated professionals and people struggling to cope with disappointing lives.

"There is no typical profile," according to a study by German security authorities, obtained by AP.

The study reported that among people leaving that country for Syria out of "Islamic extremist motives," 65 percent were believed to have prior criminal records. They ranged in age between 15 and 63. Sixty-one percent were German-born, and there were nine men for every woman.

In contrast, John G. Horgan, a psychologist who directs the Center for Terrorism & Security Studies at the University of Massachusetts Lowell, found some common traits among American recruits or would-be recruits for jihad. Typically, he said, they are in their late teens or early 20s, though a few have been in their mid-30s.

"From a psychological perspective, many of them are at a stage in their lives where they are trying to find their place in the world — who they are, what their purpose is," Horgan said. "They certainly describe themselves as people who are struggling with conflict. They are trying to reconcile this dual identity of being a Muslim and being a Westerner, or being an American."

Some are driven by religious zeal to protect the caliphate, or Muslim theocracy, that the Islamic State has proclaimed in the one-third of Syrian and Iraqi territory now in its hands; others are thrilled by the chance to join what is tantamount to a secret and forbidden club.

Still others appear to enlist mainly because others do.

"What they have in common is that they are young, they are impressionable and they are hungry for excitement," Horgan said.

Once recruits arrive in areas held by Islamic State, they appear to receive only rudimentary military training — including how to load and fire a Kalashnikov assault rifle. Nonetheless, they have been involved in "some of the most violent forms of attacks" by the group, including suicide bombings and filmed beheadings of foreigners, said William Braniff, executive director for the National Consortium for the Study of Terrorism and Responses to Terrorism, a multidisciplinary research center headquartered at the University of Maryland.

Areeb Majeed, 23, from a suburb of Mumbai, India, joined Islamic State in May 2014 and fought for six months, killing up to 55 people and taking a gunshot to the chest.

But all was not heroics. He eventually called his parents from Turkey and asked to come home, according to Indian newspapers. Majeed's chief complaint, officials from India's National Investigation Agency were quoted as saying, was that the group didn't pay him, and made him clean toilets and haul water on the battlefield.

Often, though, the foreign combatants use social media to serve as "role models and facilitators for the next volunteers," Braniff said.

"Before I came here to Syria, I had money, I had a family, I had good friends, it wasn't like I was some anarchist or somebody who just wants to destroy the world, to kill everybody," said Poulin, the Canadian ISIS recruiter.

"Put God almighty before your family, put it before yourself, put it before everything. Put Allah before everything," the bearded and bespectacled transplant from Ontario urges in the video.

Poulin's jihad ended last August; he was reported killed during an assault on a government-controlled airfield in northern Syria.

But not, according to the Canadian Broadcasting Corp., before he had recruited five others from Toronto to come fight for the Islamic State.

Monday, February 9, 2015

François Hollande: "total war".

Ab Dull

>>> The West wants to ban the Russian Federation from SWIFT, the system that links the world's banks. That would be bad for Russia's banks.
With the failure of the economic sanctions imposed on Moscow after the annexation of the Crimean Peninsula, and with the pro-Russia patriots showing vigor (yes, in less than three months they took some 600 square kilometers of territory),

the West is now considering radical solutions to win the conflict.
It is known, however, that the French president, François Hollande, has spoken of the possibility of a "total war." Hollande said this as he prepared for a meeting with Vladimir Putin in Moscow on Friday.

This phrase, "total war," was used by that wretch, Reichsminister für Volksaufklärung und Propaganda Joseph Goebbels, at the Sports Palace in Berlin.

It was the climax of the wretched Reichsminister für Volksaufklärung und Propaganda Joseph Goebbels's speech ...

"I ask you: Do you want total war?" Interrupted by applause and shouts of "yes," the wretched Reichsminister pressed the question:

"Do you want it, if necessary, more total and more radical than we can even imagine today?"
And again an ovation and a "yes" from the audience.

Well, the final result was the flag of the Soviet Union flying over the Reichstag, placed there by Comrade political officer Oleksi Prokopovich Berest, Comrade Mikhail Alekseevich Yegorov and Comrade Meliton Varlamis dze Kantaria, who unfurled it in triumph.

The French president, François Hollande, and his henchman, that Megatherium of a foreign minister, Laurent Fabius, are tiresome quacks at the very least, I'd say. But not only that: they are pieces on a board, a board with moves such as the Saudis driving down the price of oil to hurt Iran and to hurt the Russians, among other reckless plays by a West that believes itself above everything, with the power to impose its will.

Add to this that Barack Obama has said there will be more "costs and consequences" for Moscow. Barack Obama wants to arm the Ukrainians.

So: ban the Russian Federation from SWIFT, let Barack Obama arm his squalid allies, Petro Oleksiyovych Porkoshenko and the nazi-like Banderovets; and Putin, now hostage to his own high approval ratings, is going to back down??? This is going to end badly.

Angela Dorothea Merkel says she does not support arming the Ukrainians; I suppose she does not want Russia's flag flying over the Reichstag.

With so much ignorance and malice being spread about, I believe the living will soon envy the dead.

Hezbollah in Syria _ Heroes of Hezbollah in Syria

At least 20 other Lebanese people sustained injuries in the explosion which happened on a bus in the central district of Souq al-Hamadiyeh in Damascus.

The bus was carrying Shia pilgrims who were visiting holy sites in Damascus.

Hezbollah’s condemnation

The Lebanese resistance movement Hezbollah also released a statement to hit out at the deadly attack.

Hezbollah said in the statement that such an act of terrorism parallels “barbarism”, noting that the attack serves the plots of the Tel Aviv regime in the region.

"This brutal bombing represents evidence of the barbarism that is simmering in the hearts of those terrorists who are serving the criminal Zionist entity and achieving its scheme," Hezbollah said.

Hezbollah further urged bringing to justice the perpetrators of the attack, calling those behind it criminals in the hands of the Israeli regime.

“Bombings, destruction of shrines and vandalizing sanctities carried out by those criminals around the world must be the catalyst for all the rational and vital forces of the nation and of the world to focus efforts on fighting" and terminating all those who have become "a criminal tool in the hands of the Zionist entity," Hezbollah underlined.

"This bombing is an episode of a series of bombings that target pilgrims in Syria, civilians in Iraq and worshipers in Pakistan, and claim the lives of dozens of martyrs", Hezbollah said.

Syria has been grappling with a deadly crisis since March 2011. The violence fueled by Takfiri groups has so far claimed the lives of over 200,000 people, according to reports. New figures show that over 76,000 people, including thousands of children, lost their lives in Syria last year.

The Takfiri terrorist groups, with members from several Western countries, control swathes of land in Syria and Iraq, and have been carrying out horrific acts of violence such as public decapitations and crucifixions against all communities such as Shias, Sunnis, Kurds, and Christians.

Western powers and some of their regional allies -- especially Jordan, Qatar, Saudi Arabia and Turkey -- are supporting the Takfiri terrorists operating against the government of Syrian President Bashar al-Assad.

Sunday, January 25, 2015

The algorithm

In central London this spring, eight of the world’s greatest minds performed on a dimly lit stage in a wood-panelled theatre. An audience of hundreds watched in hushed reverence. This was the closing stretch of the 14-round Candidates’ Tournament, to decide who would take on the current chess world champion, Viswanathan Anand, later this year.

Each round took a day: one game could last seven or eight hours. Sometimes both players would be hunched over their board together, elbows on table, splayed fingers propping up heads as though to support their craniums against tremendous internal pressure. At times, one player would lean forward while his rival slumped back in an executive leather chair like a bored office worker, staring into space.

Then the opponent would make his move, stop his clock, and stand up, wandering around to cast an expert glance over the positions in the other games before stalking upstage to pour himself more coffee. On a raised dais, inscrutable, sat the white-haired arbiter, the tournament’s presiding official. Behind him was a giant screen showing the four current chess positions. So proceeded the fantastically complex slow-motion violence of the games, and the silently intense emotional theatre of their players.

When Garry Kasparov lost his second match against the IBM supercomputer Deep Blue in 1997, people predicted that computers would eventually destroy chess, both as a contest and as a spectator sport. Chess might be very complicated but it is still mathematically finite. Computers that are fed the right rules can, in principle, calculate ideal chess variations perfectly, whereas humans make mistakes. Today, anyone with a laptop can run commercial chess software that will reliably defeat all but a few hundred humans on the planet. Isn’t the spectacle of puny humans playing error-strewn chess games just a nostalgic throwback?
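
The claim that a finite game can, in principle, be calculated perfectly is easy to demonstrate in miniature. The sketch below is not chess, and certainly not Deep Blue's code: it is a toy negamax search (the standard formulation of minimax) applied to simple Nim, a game small enough to solve exactly. Because the game tree is finite, the program never blunders.

```python
def best_move(pile):
    """Return (best_take, score) for the player to move in simple Nim.

    Players alternately take 1-3 stones; whoever takes the last stone wins.
    Score is +1 if the side to move can force a win, -1 if it cannot.
    """
    if pile == 0:
        return None, -1          # no stones left: the previous player just won
    best = (None, -2)
    for take in (1, 2, 3):
        if take <= pile:
            _, opp_score = best_move(pile - take)
            score = -opp_score   # negamax: our score is the negation of the opponent's
            if score > best[1]:
                best = (take, score)
    return best

# Piles that are multiples of 4 are lost for the player to move:
print(best_move(5))  # (1, 1): take one stone, leaving a losing pile of 4
print(best_move(4))  # (1, -1): every move loses against perfect play
```

A chess engine works on the same principle, except that the full tree is astronomically large, so it searches to a limited depth and scores the leaves heuristically, which is where human-like imperfection creeps back in.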

Such a dismissive attitude would be in tune with the spirit of the times. Our age elevates the precision-tooled power of the algorithm over flawed human judgment. From web search to marketing and stock-trading, and even education and policing, the power of computers that crunch data according to complex sets of if-then rules is promised to make our lives better in every way.

Automated retailers will tell you which book you want to read next; dating websites will compute your perfect life-partner; self-driving cars will reduce accidents; crime will be predicted and prevented algorithmically. If only we minimise the input of messy human minds, we can all have better decisions made for us. So runs the hard sell of our current algorithm fetish.

But in chess, at least, the algorithm has not displaced human judgment. The imperfectly human players who contested the last round of the Candidates’ Tournament — in a thrilling finish that, thanks to unusual tiebreak rules, confirmed the 22-year-old Norwegian Magnus Carlsen as the winner, ahead of former world champion Vladimir Kramnik — were watched by an online audience of 100,000 people. In fact, the host of the streamed coverage, the chatty and personable international master Lawrence Trent, pointedly refused to use a computer engine (which he called ‘the beast’) for his own analyses and predictions.

The idea, he explained, is to try to figure things out for yourself. During a break in the commentary room on the day I was there, Trent was eating crisps and still eagerly discussing variations with his plummily amusing co-presenter, Nigel Short (who himself had contested the World Championship against Kasparov in 1993). ‘He’ll find Qf4; it’s not difficult to find,’ Short assured Trent. ‘Ng8, then it’s…’ ‘It’s game over.’ ‘Game over!’

Chess is an Olympian battle of wits. As with any sport, the interest lies in watching profoundly talented humans operating at the limits of their capability. There does exist a cyborg version of the game, dubbed ‘advanced chess’, in which humans are allowed to use computers while playing. But it is profoundly boring to watch, like a contest over who can use spreadsheet software more effectively, and hasn’t caught on. The ‘beast’ can be a useful helpmeet — Veselin Topalov, a previous challenger for Anand’s world title, used a 10,000-CPU monster in his preparation for that match, which he still lost — but it’s never going to be the main event.

This is a lesson that the algorithm-boosters in the wider culture have yet to learn. And outside the Platonically pure cosmos of chess, when we seek to hand over our decision-making to automatic routines in areas that have concrete social and political consequences, the results might be troubling indeed.

At first thought, it seems like a pure futuristic boon — the idea of a car that drives itself, currently under development by Google. Already legal in Nevada, Florida and California, computerised cars will be able to drive faster and closer together, reducing congestion while also being safer. They’ll drop you at your office then go and park themselves.

What’s not to like? Well, for a start, as the mordant critic of computer-aided ‘solutionism’ Evgeny Morozov points out, the consequences for urban planning might be undesirable to some. ‘Would self-driving cars result in inferior public transportation as more people took up driving?’ he wonders in his new book, To Save Everything, Click Here (2013).

More recently, Gary Marcus, professor of psychology at New York University, offered a vivid thought experiment in The New Yorker. Suppose you are in a self-driving car going across a narrow bridge, and a school bus full of children hurtles out of control towards you. There is no room for the vehicles to pass each other. Should the self-driving car take the decision to drive off the bridge and kill you in order to save the children?

What Marcus’s example demonstrates is the fact that driving a car is not simply a technical operation, of the sort that machines can do more efficiently. It is also a moral operation. (His example is effectively a kind of ‘trolley problem’, of the sort that has lately been fashionable in moral philosophy.) If we let cars do the driving, we are outsourcing not only our motor control but also our moral judgment.

Meanwhile, as Morozov relates, a single Californian company called Impermium provides software to tens of thousands of websites to automatically flag online comments for ‘not only spam and malicious links, but all kinds of harmful content — such as violence, racism, flagrant profanity, and hate speech’. How do Impermium’s algorithms decide exactly what should count as ‘hate speech’ or obscenity? No one knows, because the company, quite understandably, isn’t going to give away its secrets. Yet rather than pursuing mere lexicographical analysis, such a system of automated pre-censorship is, again, making moral judgments.

If self-driving cars and speech-policing systems are going to make hard moral decisions for us, we have a serious stake in knowing exactly how they are programmed to do it. We are unlikely to be content simply to trust Google, or any other company, not to code any evil into its algorithms. For this reason, Morozov and other thinkers say that we need to create a class of ‘algorithmic auditors’ — trusted representatives of the public who can peer into the code to see what kinds of implicit political and ethical judgments are buried there, and report their findings back to us. This is a good idea, though it poses practical problems about how companies can retain the commercial edge provided by their computerised secret sauce if they have to open up their algorithms to quasi-official scrutiny.

A further problem is that some algorithms positively must be kept under wraps in order to work properly. It is already possible, for example, for malicious operators to ‘game’ Google’s autocomplete results — sending abusive or libellous descriptions to the top of Google’s suggestions when you type a person’s name — and lawsuits from people affected in this way have already forced the company to delve into the system and change such examples manually. If it were made public exactly how Google’s PageRank algorithm computes the authority of web pages, or how Twitter’s ‘trending’ algorithm determines the popularity of subjects, then unscrupulous self-marketers or vengeful exes would soon be gaming those algorithms for their own purposes too. The vast majority of users would lose out, because the systems would become less reliable.
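
Google's production ranking system is secret and far more elaborate, but the original PageRank idea was published, and its textbook form is short enough to sketch. The power-iteration version below runs on a four-page toy web whose link structure is invented for illustration; the page everyone links to ends up ranked highest, which is exactly the property that self-marketers try to game with link farms.

```python
def pagerank(links, damping=0.85, iters=50):
    """Textbook PageRank by power iteration.

    links maps each page to the list of pages it links to.
    Each page repeatedly shares its rank among its outgoing links.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}  # teleportation term
        for p, outs in links.items():
            if not outs:
                continue  # dangling page: its rank mass is dropped in this simple sketch
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# A tiny invented web: every page links to 'hub', so it ranks highest.
web = {
    "a": ["hub"],
    "b": ["hub", "a"],
    "c": ["hub"],
    "hub": ["a"],
}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # hub
```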

And it doesn’t necessarily require a malicious individual gaming a system for algorithms to get uncomfortably personal. Automatic analysis of our smartphone geolocation, internet-browsing and social-media data-trails grows ever more sophisticated, and so we can thin-slice demographic categories ever more precisely.

From such information it is possible to infer personal details (such as sexual orientation or use of illegal drugs) that have not been explicitly supplied, and sometimes to identify unique individuals. Even when such information is simply used to target adverts more accurately, the consequences can be uncomfortable. Last year, the journalist Charles Duhigg related a telling anecdote in an article for The New York Times called ‘How Companies Learn Your Secrets’.

A decade ago, the American retailer Target sent promotional baby-care vouchers to a teenage girl in Minneapolis. Her father was so outraged, he went to the shop to complain. The manager was equally taken aback and apologised; a few days later, he called the family to apologise again. This time, it was the father who offered an apology: his daughter really was pregnant, and Target’s ‘predictive analytics’ system knew it before he did.

Such automated augury might be considered relatively harmless if its use is confined to figuring out what products we might like to buy. But it is not going to stop there. One day in the near future — perhaps this has already happened — an innocent crime novelist researching bloody techniques for his latest fictional serial killer will find armed men banging on his door in the middle of the night, because he left a data trail that caused lights to flash red in some preventive-policing algorithm.

Perhaps a few distressed writers is a price we are willing to pay to prevent more murders. But predictive crime prevention is an area that leads rapidly to a dystopian sci-fi vision like that of the film Minority Report (2002).

In Baltimore and Philadelphia, software is already being used to predict which prisoners will reoffend if released. The software works on a crime database, and variables including geographic location, type of crime previously committed, and age of prisoner at previous offence. In so doing, according to a report in Wired in January this year, ‘The software aims to replace the judgments parole officers already make based on a parolee’s criminal record.’ Outsourcing this kind of moral judgment, where a person’s liberty is at stake, understandably makes some people uncomfortable.
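
The Wired report does not say how the Baltimore and Philadelphia software weighs its variables, but risk tools of this kind are typically linear "scorecards": each variable contributes a weight, and the total is squashed into a probability. The sketch below follows the variables named in the article, with every weight and threshold invented purely for illustration; it is emphatically not the real model.

```python
import math

# Hypothetical weights, invented for illustration only.
WEIGHTS = {"violent_prior": 1.2, "property_prior": 0.4, "high_crime_area": 0.6}
AGE_WEIGHT = -0.05   # older age at the previous offence lowers the score
BIAS = -1.0

def reoffence_risk(violent_prior, property_prior, high_crime_area, age_at_offence):
    """Return a probability-like risk score via a logistic scorecard."""
    z = BIAS + AGE_WEIGHT * age_at_offence
    z += WEIGHTS["violent_prior"] * violent_prior
    z += WEIGHTS["property_prior"] * property_prior
    z += WEIGHTS["high_crime_area"] * high_crime_area
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing into (0, 1)

# A young offender with a violent prior in a high-crime area scores higher
# than an older offender with only a property prior:
print(reoffence_risk(1, 0, 1, 19) > reoffence_risk(0, 1, 0, 45))  # True
```

Note what a parole board actually receives from such a tool: a number. Somewhere a threshold converts that number into "release" or "detain", and it is at that threshold that the false positives discussed below are manufactured.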

First, we don’t yet know whether the system is more accurate than humans. Secondly, even if it is more accurate but less than completely accurate, it will inevitably produce false positives — resulting in the continuing incarceration of people who wouldn’t have reoffended. Such false positives undoubtedly occur, too, in the present system of human judgment, but at least we might feel that we can hold those making the decisions responsible. How do you hold an algorithm responsible?

Still more science-fictional are recent reports claiming that brain scans might be able to predict recidivism by themselves. According to a press release for the research, conducted by the American non-profit organisation the Mind Research Network, ‘inmates with relatively low anterior cingulate activity were twice as likely to reoffend than inmates with high brain activity in this region’.

Twice as likely, of course, is not certain. But imagine, for the sake of argument, that eventually a 100 per cent correlation could be determined between certain brain states and future recidivism. Would it then be acceptable to deny people their freedom on such an algorithmic basis? If we answer yes, we are giving our blessing to something even more nebulous than thoughtcrime. Call it ‘unconscious brain-state crime’.

 In a different context, such algorithm-driven diagnosis could be used positively: according to one recent study at Duke University in North Carolina, there might be a neural signature for psychopathy, which the researchers at the laboratory of neurogenetics suggest could be used to devise better treatments. But to rely on such an algorithm for predicting recidivism is to accept that people should be locked up simply on the basis of facts about their physiology.

If we erect algorithms as our ultimate judges and arbiters, we face the threat of difficulties not only in law-enforcement but also in culture. In the latter realm, the potential unintended consequences are not as serious as depriving an innocent person of liberty, but they still might be regrettable. For if they become very popular, algorithmic systems could end up destroying what they feed on.

In the early days of Amazon, the company employed a panel of book critics, whose job was to recommend books to customers. When Amazon developed its algorithmic recommendation engine — an automated system based on data about what others had bought — sales shot up. So Amazon sacked the humans.
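
Amazon's actual engine is proprietary, but the published idea behind it is item-to-item collaborative filtering: recommend whatever is most often bought by the same customers as the item in hand. A minimal co-purchase version can be sketched as follows (all order data invented):

```python
from collections import Counter
from itertools import combinations

# Invented order history: each set is one customer's basket.
orders = [
    {"dune", "foundation", "hyperion"},
    {"dune", "foundation"},
    {"dune", "hyperion"},
    {"emma", "persuasion"},
]

# Count how often each ordered pair of items appears in the same basket.
co_bought = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_bought[(a, b)] += 1
        co_bought[(b, a)] += 1

def recommend(item, k=2):
    """Items most often bought together with `item`, best first."""
    scores = Counter({b: n for (a, b), n in co_bought.items() if a == item})
    return [b for b, _ in scores.most_common(k)]

print(recommend("dune"))
```

The sketch makes the essay's point concrete: "better" here can only ever mean "more frequently co-purchased". Nothing in the counting step has anywhere to put a criterion like broadening a reader's horizons.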

Not many people are likely to weep hot tears over a few unemployed literary critics, but there still seems room to ask whether there is a difference between recommendations that lead to more sales, and recommendations that are better according to some other criterion — expanding readers’ horizons, for example, by introducing them to things they would never otherwise have tried. It goes without saying that, from Amazon’s point of view, ‘better’ is defined as ‘drives more sales’, but we might not all agree.

Algorithmic recommendation engines now exist not only for books, films and music but also for articles on the internet. There is so much out there that even the most popular human ‘curators’ cannot possibly keep on top of all of it. So what’s wrong with letting the bots have a go? Viktor Mayer-Schönberger is professor of internet governance and regulation at Oxford University; Kenneth Cukier is the data editor of The Economist. In their book Big Data (2013) — which also calls for algorithmic auditors — they sing the praises of one Californian company, Prismatic, that, in their description, ‘aggregates and ranks content from across the Web on the basis of text analysis, user preferences, social-network-related popularity, and big-data analytics’.

 In this way, the authors claim, the company is able to ‘tell the world what it ought to pay attention to better than the editors of The New York Times’. We might happily agree — so long as we concur with the implied judgment that what is most popular on the internet at any given time is what is most worth reading. Aficionados of listicles, spats between technology theorists, and cat-based modes of pageview trolling do not perhaps constitute the entire global reading audience.

So-called ‘aggregators’ — websites, such as the Huffington Post, that reproduce portions of articles from other media organisations — also deploy algorithms alongside human judgment to determine what to push under the reader’s nose. ‘The data,’ Mayer-Schönberger and Cukier explain admiringly, ‘can reveal what people want to read about better than the instincts of seasoned journalists’. This is true, of course, only if you believe that the job of a journalist is just to give the public what it already thinks it wants to read. Some, such as Cass Sunstein, the political theorist and Harvard professor of law, have long worried about the online ‘echo chamber’ phenomenon, in which people read only that which reinforces their currently held views. Improved algorithms seem destined to amplify such effects.

Some aggregator sites have also been criticised for paraphrasing too much of the original article and obscuring source links, making it difficult for most readers to read the whole thing at the original site. Still more remote from the source is news packaged by companies such as Summly — the iPhone app created by the British teenager Nick D’Aloisio — which used another company’s licensed algorithms to summarise news stories for reading on mobile phones. Yahoo recently bought Summly for US$30 million.

However, the companies that produce news often depend on pageviews to sell the advertising that funds the production of their ‘content’ in the first place. So, to use algorithm-aided aggregators or summarisers in daily life might help to render the very creation of content less likely in the future. In To Save Everything, Click Here, Evgeny Morozov draws a provocative analogy with energy use:

Our information habits are not very different from our energy habits: spend too much time getting all your information from various news aggregators and content farms who merely repackage expensive content produced by someone else, and you might be killing the news industry in a way not dissimilar from how leaving gadgets in the standby mode might be quietly and unnecessarily killing someone’s carbon offsets.

Meanwhile in education, ‘massive open online courses’ known as MOOCs promise (or threaten) to replace traditional university teaching with video ‘lectures’ online. The Silicon Valley hype surrounding these MOOCs has been stoked by the release of new software that automatically marks students’ essays. Computerised scoring of multiple-choice tests has been around for a long time, but can prose essays really be assessed algorithmically? Currently, more than 3,500 academics in the US have signed an online petition that says no, pointing out:

Computers cannot ‘read’. They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organisation, clarity, and veracity, among others.

It would not be surprising if these educators felt threatened by the claim that software can do an important part of their job. The overarching theme of all MOOC publicity is the prospect of teaching more people (students) using fewer people (professors). Will what is left really be ‘teaching’ worth the name?

If you are feeling gloomy about the automation of higher education, the death of newspapers, and global warming, you might want to talk to someone — and there’s an algorithm for that, too. A new wave of smartphone apps with eccentric titular orthography (iStress, myinstantCOACH, MoodKit, BreakkUp) promise a psychotherapist in your pocket. Thus far they are not very intelligent, and require the user to do most of the work — though this second drawback could be said of many human counsellors too.

Such apps hark back to one of the legendary milestones of ‘artificial intelligence’, the 1960s computer program called ELIZA. That system featured a mode in which it emulated Rogerian psychotherapy, responding to the user’s typed conversation with requests for amplification (‘Why do you say that?’) and picking up — with its ‘natural-language processing’ skills — on certain key words from the input. Rudimentary as it is, ELIZA can still seem spookily human. Its modern smartphone successors might be diverting, but this field presents an interesting challenge in the sense that, the more sophisticated it gets, the more potential for harm there will be. One day, the makers of an algorithm-driven psychotherapy app could be sued by the survivors of someone to whom it gave the worst possible advice.
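
Weizenbaum's original ELIZA was written in the 1960s in MAD-SLIP, but the pattern-and-reflection trick it relied on fits in a few lines of any modern language. The rules below are invented and far shorter than the real DOCTOR script; the mechanism (keyword patterns, pronoun reflection, canned fallbacks) is the same one the essay describes:

```python
import random
import re

# A few invented rules in the spirit of the DOCTOR script.
RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bmy (mother|father)\b", ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?", "What other reasons come to mind?"]),
]
DEFAULTS = ["Why do you say that?", "Please go on.", "How does that make you feel?"]

# Pronoun reflection so 'my'/'I' in the input reads naturally in the reply.
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}

def reflect(text):
    return " ".join(REFLECT.get(w, w) for w in text.lower().split())

def respond(utterance):
    """Match the first rule that fires; otherwise fall back to a stock prompt."""
    for pattern, replies in RULES:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            return random.choice(replies).format(*map(reflect, m.groups()))
    return random.choice(DEFAULTS)

print(respond("I need my chess engine"))  # e.g. 'Why do you need your chess engine?'
```

That a handful of regular expressions can "seem spookily human" is the unsettling part: the user supplies nearly all of the intelligence in the conversation, which is also why such apps become more dangerous, not less, as they grow more convincing.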

What lies behind our current rush to automate everything we can imagine? Perhaps it is an idea that has leaked out into the general culture from cognitive science and psychology over the past half-century — that our brains are imperfect computers. If so, surely replacing them with actual computers can have nothing but benefits. Yet even in fields where the algorithm’s job is a relatively pure exercise in number-crunching, things can go alarmingly wrong.

Indeed, a backlash to algorithmic fetishism is already under way — at least in those areas where a dysfunctional algorithm’s effect is not some gradual and hard-to-measure social or cultural deterioration but an immediate difference to the bottom line of powerful financial organisations. High-frequency trading, where automated computer systems buy and sell shares very rapidly, can lead to the price of a security fluctuating wildly.

 Such systems were found to have contributed to the ‘flash crash’ of 2010, in which the Dow Jones index lost 9 per cent of its value in minutes. Last year, the New York Stock Exchange cancelled trades in six stocks whose prices had exhibited bizarre behaviour thanks to a rogue ‘algo’ — as the automated systems are known in the business — run by Knight Capital; as a result of this glitch, the company lost $440 million in 45 minutes. Regulatory authorities in Europe, Hong Kong and Australia are now proposing rules that would require such trading algorithms to be tested regularly; in India, an algo cannot even be deployed unless the National Stock Exchange is allowed to see it first and decides it is happy with how it works.

Here, then, are the first ‘algorithmic auditors’. Perhaps their example will prompt similar developments in other fields — culture, education, and crime — that are considerably more difficult to quantify, even when there is no immediate cash peril.

A casual kind of post-facto algorithmic auditing was already in evidence in London, at the Candidates’ Tournament. All the chess players gave press conferences after their games, analysing critical positions and showing what they were thinking. This often became a second contest in itself: players were reluctant to admit that they had missed anything (‘Of course, I saw that’), and vied to show they had calculated more deeply than their adversaries.

 On the day I attended, the amiable Anglophile Russian player (and cricket fanatic) Peter Svidler was discussing his colourful but peacefully concluded game with Israel’s Boris Gelfand, last year’s World Championship challenger. Juggling pieces on a laptop screen with a mouse, Svidler showed a complicated line that had been suggested by someone using a computer program. ‘This, apparently, is a draw,’ Svidler said, ‘but there’s absolutely no way anyone can work this out at the board’. The computer’s suggestion, in other words, was completely irrelevant to the game as a sporting exercise.

Now, as the rumpled Gelfand looked on with friendly interest, Svidler jumped to an earlier possible variation that he had considered pursuing during their game, ending up with a baffling position that might have led either to spectacular victory or chaotic defeat. ‘For me,’ he announced, ‘this will be either too funny … or not funny enough’. Everyone laughed. As yet, there is no algorithm for wry comedy.