Blog Archive
Promoting statistical literacy: a proposal
Why do our institutions - particularly banks - fail to grasp the most rudimentary basics of password security?
Here's a modest proposal: what if the government took it on board to promote a reasonable, sane grasp of risk, security, and probability? Or, if you're a "Big Society/Small Government" LibCon, how about a more modest mandate still: we could ask the state to leave off promoting statistical innumeracy and the inability to understand risk and reward.
Start with the lottery: in the US, its slogan is "Lotto: You've Got to Be In It to Win It". A more numerate slogan would be "Lotto: Your Chance of Finding the Winning Ticket in the Road is Approximately the Same as Your Chance of Buying it". The more we tell people that there is a meaningful gap between the one-in-a-squillion chance of finding the winning ticket and the one-in-several-million chance of buying it, the more we encourage the statistical fallacy that events are inherently more likely if they're very splashy and interesting to consider.
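The arithmetic behind the more numerate slogan is easy to check. A quick sketch, assuming a 6-from-49 draw of the kind the UK National Lottery used:

```python
from math import comb

# Number of equally likely ticket combinations in a 6-from-49 draw
tickets = comb(49, 6)

print(f"P(winning ticket) = 1 in {tickets:,}")  # 1 in 13,983,816
```

Roughly one in fourteen million: the kind of odds at which "buying a ticket" and "finding one in the road" really do start to look alike.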
Ban bingo? Aren't punters who want to lose their money learning a valuable lesson through a voluntary tax on innumeracy? Maybe, but if getting rid of the lottery could lead to even a slight improvement in our sense of risk and safety, think of the society-wide savings: money not spent on alarmist newspapers, charlatan child-protection schemes, MMR scares and the like!
Once we get rid of the lottery, let's attack the banks. It's not enough that they trousered huge bonuses from the state while destroying the economy; they also systematically disorder our ability to understand risk and security with an ever more farcical stream of "identity" hoops and Bizarro World "security" theatre!
For example, my own bank, the Co-op, recently updated its business banking site (the old one was "best viewed with Windows 2000!"), "modernising" it with a new two-factor authentication scheme in the form of a little numeric keypad gadget you carry around with you. When you want to see your balance, you key a Pin into the gadget, and it returns a 10-digit number, which you then have to key into a browser field that helpfully masks your keystrokes as you enter this huge one-time password.
Don't get me wrong: two-factor authentication makes perfect sense, and there's nothing wrong with using it to keep users' passwords out of the hands of keyloggers and other surveillance creeps. But a system that locks users out after three bad tries does not need to generate a 10-digit one-time password: the likelihood of guessing a modest four- or five-digit password in three tries is small enough that no appreciable benefit comes out of the other digits (but the hassle to the Co-op's many customers of these extra numbers, multiplied by every login attempt for years and years to come, is indeed appreciable).
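The sums bear this out. With a lockout after three bad tries, an attacker's best chance of guessing a random n-digit code is three guesses out of 10**n possibilities. A minimal sketch:

```python
# Probability an attacker guesses a random n-digit one-time code
# before the account locks (three failed attempts allowed).
def guess_probability(digits: int, attempts: int = 3) -> float:
    return attempts / 10 ** digits

for n in (4, 5, 10):
    print(f"{n} digits: 1 in {10 ** n // 3:,}")
```

Three tries against even a four-digit code succeed about one time in 3,333; the six extra digits buy a further million-fold improvement on odds that were already negligible, at a very real daily cost in mis-typed logins.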
As if to underscore the Co-op's security illiteracy, we have this business of masking the one-time Pin as you type it. The whole point of a one-time password is that it doesn't matter if it leaks, because it only works once. That's why we call it "one-time". Asking customers to key in a meaningless 10-digit code perfectly, every time, without visual feedback, isn't security. It's sadism.
It gets worse: the Pin you use with the gadget is your basic four-digit Pin, but numbers can't be sequential. This has the effect of reducing the keyspace by an enormous factor - a bizarrely contrarian move from a bank that "improves" its security by turning this constrained four-digit number into a whopping 10-digit one. Does the Co-op love or loathe large keyspaces? Both, it seems.
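How much does such a rule actually cost? The bank doesn't publish its exact definition of "sequential", so as an assumption, read it as "no digit may be followed by the digit one higher" and count by brute force:

```python
from itertools import product

# Count four-digit Pins with no ascending adjacent pair (e.g. no "12"
# or "89" anywhere) - one reading of a "no sequential numbers" rule;
# the bank's real rule may be stricter or looser.
def allowed_pins() -> int:
    return sum(
        all(b != a + 1 for a, b in zip(pin, pin[1:]))
        for pin in product(range(10), repeat=4)
    )

remaining = allowed_pins()
print(f"{remaining} of 10,000 Pins remain allowed")
```

Even under this mild reading, a quarter of the keyspace disappears; stricter readings (banning descending runs, repeats, and so on) carve away correspondingly more.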
It's not just the Co-op, of course - this is endemic to the whole industry. For example, Citibank UK requires you to input your password by chasing a tiny, on-screen, all-caps keyboard with your mouse-pointer, in the name of preventing a keylogger from capturing your password as you type it. This has the neat triple-play effect of slicing the keyspace in half (and more) by eliminating special characters and lower-case letters; incentivising customers to use shorter, less secure passwords because of the hassle of inputting them; and leaving it vulnerable to any screen-recorder, which simply makes a movie of which on-screen keys you click.
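The keyspace loss here is easy to quantify. Restricting a password to digits and capitals (36 symbols) instead of the full printable ASCII set (94 symbols) discards about 1.4 bits of entropy per character - around 11 bits for the illustrative eight-character password assumed in this sketch:

```python
from math import log2

LENGTH = 8                   # illustrative password length, not Citibank's rule
full = 94 ** LENGTH          # all printable ASCII characters
caps_only = 36 ** LENGTH     # A-Z plus 0-9, as on an all-caps screen keyboard

print(f"full charset:  {log2(full):.1f} bits")
print(f"caps + digits: {log2(caps_only):.1f} bits")
print(f"lost entropy:  {log2(full) - log2(caps_only):.1f} bits")
```

Eleven bits means the constrained password is roughly 2,000 times easier to brute-force than an unconstrained one of the same length - before customers shorten it further out of sheer frustration.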
It wasn't easy - the branch staff couldn't believe that I had won an exception to this weird policy - but in the end, they opened the account for me. Now, like a mouse that's found an experimental lever that only sometimes gives up a pellet, I find myself repeatedly pressing it, hoping to hit on the magical combination that will get my bank to behave as though security were something that a reasonable, sane person could understand, as opposed to a magic property that arises spontaneously in the presence of sufficient obfuscation and bureaucracy.
The irony, of course, is that the banks will all tell you that they only put you through this pointless security hell because the FSA or some other body makes them do it. The regulators strenuously deny this, saying that they specify only principles - "know your customer" - not particular practices.
Which brings me back to my modest proposal: let's empower our regulators to fine banks that create nonsensical, incoherent security practices involving idolatrous worship of easy-to-forge utility bills and headed paper, in the name of preserving our national capacity to think critically about security.
Even if it doesn't kill the power of the tabloids to sell with screaming headlines about paedos, terrorists and vaccinations, it would, at least, be incredibly satisfying to keep your money in an institution that appears to have the most rudimentary grasp of what security is and where it comes from.
- Data and computer security
- Internet
- Computation
Robot warfare: call for tighter controls
Conferences will raise concerns over unpiloted aircraft and ground machines that choose their own targets
The rapid proliferation of military drone planes and armed robots should be subject to international legal controls, conferences in London and Berlin will argue this month.
Public awareness of attacks by unmanned aerial vehicles (UAVs), such as Reapers and Predators, in Afghanistan and Pakistan has grown but less is known of the evolution of unmanned ground vehicles (UGVs).
Two conferences - Drone Wars in London on 18 September and a three-day workshop organised by the International Committee for Robot Arms Control (ICRAC) in Berlin on 20-22 September - will hear calls for bans and stricter rules in the framework of international treaties on arms limitation.
British academics and policy experts, Red Cross representatives, peace activists, military advisers, human rights lawyers and those opposed to the arms trade are participating in the German meeting.
Prominent among them is Noel Sharkey, professor of robotics and artificial intelligence at Sheffield University and a judge on the BBC series Robot Wars, who is speaking at both gatherings.
The development of what is known as "autonomous targeting" - where unmanned planes and military ground vehicles are engineered to lock automatically on to what their onboard computers assume is the enemy - has heightened concern.
Research is under way on enabling UAVs and UGVs to work in collaborative swarms, ensuring each machine selects a different target. This has reinforced fears that UAV strikes along the Afghanistan-Pakistan border and in the Horn of Africa - or wherever future wars are fought - will increase death tolls.
RAF pilots already operate armed drones from Creech US air force base in the Nevada desert. Eight thousand miles away from the frontline, they control the release of Hellfire missiles and Paveway bombs against Taliban targets.
Through a freedom of information request submitted to the Ministry of Defence, the Oxford-based Fellowship of Reconciliation - the group organising the Drone Wars conference - found that, as of April this year, RAF-controlled Reapers had opened fire on 84 occasions.
Defence equipment manufacturers insist that there is always "a man in the [control] loop" to authorise operations and that they are far less indiscriminate than the high level air force saturation bombing that occurred in the second world war. Since there is no onboard pilot at risk, so the argument goes, they do not always have to fire first.
Philip Alston, a UN human rights special rapporteur, warned last autumn that US use of drones to kill militants in Afghanistan and Pakistan may violate international law. He called on the US to explain the legal basis for killing individuals with its drones.
"More than 40 countries have robotic programmes now," said Sharkey. "Even Iran has launched a UAV bomber with a range of several hundred miles.
"These [robotic] systems are difficult to develop but easy to copy. In the States a large proportion of robot making is being moved to Michigan to compensate for the decline in the car industry.
"Increasingly [the manufacturers] are talking about the 'man on the loop', where one person can control a swarm of robots. Our biggest concern for the future is autonomous systems that [select] targets themselves."
Dr Steve Wright, a reader in applied global ethics at Leeds Metropolitan University who will speak at an ICRAC workshop on the dangers of terrorists obtaining drones, said: "We need a new treaty to limit proliferation. All the arms fairs now are selling UAVs. It's naive to think they will remain in the hands of governments."
- Robots
- Arms trade
Google Engineer Fired For Spying On Teen Users; Serious Privacy Concerns Raised
What's still rather alarming, however, is that this was possible, and that, despite all of Google's claims of security and procedures to keep these things from happening, the news did not come out until Google was alerted to the actions by parents of some of the teens involved. Google is notoriously secretive on these issues, and its "statement" on this matter, frankly, is pretty weak:
"We dismissed David Barksdale for breaking Google's strict internal privacy policies. We carefully control the number of employees who have access to our systems, and we regularly upgrade our security controls--for example, we are significantly increasing the amount of time we spend auditing our logs to ensure those controls are effective. That said, a limited number of people will always need to access these systems if we are to operate them properly--which is why we take any breach so seriously."

That doesn't explain anything about how Google makes sure these kinds of things won't happen again. I certainly can understand that there's always going to need to be some people who can access certain systems, but the question is what Google does to make sure that access is not just limited, but monitored to avoid serious abuses like this. At a time when Google is under such strict scrutiny for privacy issues, this news and Google's response are simply unacceptable.