
Driverless cars to hit California (sorry!)


1898 replies to this topic

#1751 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 30 April 2018 - 16:36

Interesting article, I concur.

 

http://www.latimes.c...0430-story.html




#1752 GreenMachine

GreenMachine
  • Member

  • 1,570 posts
  • Joined: March 04

Posted 30 April 2018 - 23:20

Yes, interesting.

 

The analogy with the aircraft industry is fatally (sorry  ;)) flawed.  Firstly, that is a mature industry, with mature products.  Secondly, it is the users that are sharing data, not the manufacturers/vendors (at least in the first instance).

 

Yes, transparency would help alleviate concerns about safety, but I doubt it will happen in the start-up phase of a new industry sector, especially when traditional players are being challenged by aggressive disrupters.  Transparency will require decisive intervention by transport regulation agencies - that may happen in Europe, but the USA and China ... I don't see it happening there, and it really needs to be global for competition reasons.



#1753 desmo

desmo
  • Tech Forum Host

  • 19,975 posts
  • Joined: January 00

Posted 01 May 2018 - 13:57

I don't see how liability suits resulting from fatal accidents can be defended without revealing the technologies employed.  "Something went wrong inside this black box we keep secret" won't work as a defense.  In fact, a lot of the law that applies to AVs won't begin to come into focus until there has been a slew of liability cases that aren't quietly settled but are brought to trial in open court with discovery rules. 



#1754 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 01 May 2018 - 15:45

Yes, interesting.

 

The analogy with the aircraft industry is fatally (sorry  ;)) flawed.  Firstly, that is a mature industry, with mature products.  Secondly, it is the users that are sharing data, not the manufacturers/vendors (at least in the first instance).

 

Yes, transparency would help alleviate concerns about safety, but I doubt it will happen in the start-up phase of a new industry sector, especially when traditional players are being challenged by aggressive disrupters.  Transparency will require decisive intervention by transport regulation agencies - that may happen in Europe, but the USA and China ... I don't see it happening there, and it really needs to be global for competition reasons.

 

And a fledgling industry can't learn the lessons of a mature one? I would effing well hope it could and, indeed, should.



#1755 GreenMachine

GreenMachine
  • Member

  • 1,570 posts
  • Joined: March 04

Posted 01 May 2018 - 21:26

I don't see how liability suits resulting from fatal accidents can be defended without revealing the technologies employed.


And that is why there will be an uncharacteristic rush to settle after accidents, as seen after the most recent fatal one.



#1756 desmo

desmo
  • Tech Forum Host

  • 19,975 posts
  • Joined: January 00

Posted 02 May 2018 - 18:59

The brighter civil attorneys representing victims of AV technology gone bad will quickly figure out that a deep pocketed defendant desperate to avoid a public trial is a gift that will keep giving.



#1757 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 08 May 2018 - 20:23

Quote from The Daily Telegraph:

 

A self-driving Uber may have spotted the pedestrian killed in the first fatal crash with a driverless car but ignored her anyway.

 

The self-driving car which killed 49-year-old Elaine Herzberg in Phoenix, Arizona in March saw the pedestrian as a "false positive", causing its on-board system to decide to ignore her rather than swerve to avoid the crash, according to The Information.

 

The car's sensors detected the pedestrian, but according to Uber's internal investigation into the crash, the self-driving car had been tuned to ignore obstacles it didn't deem a risk.

 

Well, that's reassuring - Not.



#1758 gruntguru

gruntguru
  • Member

  • 6,821 posts
  • Joined: January 09

Posted 08 May 2018 - 22:19

False positive eh? It will be interesting to learn the process that arrived at that decision. How many sensors did/did-not detect the pedestrian?

 

"Tuned"??? Perhaps it had an aftermarket chip fitted?



#1759 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 09 May 2018 - 03:30

As I understand it the sensors were fine, it's the decision that was made to classify her as an ignorable entity that is in question. Random guess, bicycle wheels classified as rolling tumbleweed. Rather like deciding to tread on a stick that turns out to be a brown snake.
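As a purely illustrative sketch (not Uber's code - the class names, labels and threshold below are invented), this is roughly how a perception filter that has been "tuned" to suppress false positives can end up discarding a real obstacle:

ACTIONABLE_CLASSES = {"pedestrian", "cyclist", "vehicle"}
CONFIDENCE_THRESHOLD = 0.8   # raise this to cut false alarms - and more real objects get ignored

def should_brake(detections):
    # detections: list of (class_label, confidence) pairs from the sensor fusion stage
    for label, confidence in detections:
        if label in ACTIONABLE_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
            return True
    return False   # anything below threshold is treated as a false positive

# A real pedestrian reported with shifting, low-confidence labels never trips the brake:
print(should_brake([("unknown", 0.9), ("cyclist", 0.55), ("vehicle", 0.40)]))   # False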




#1760 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 09 May 2018 - 11:35

Sometimes I think that, in London, being tuned to ignore pedestrians would make progress much easier, however unethical and immoral that would be...



#1761 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 09 May 2018 - 17:05

As I understand it the sensors were fine, it's the decision that was made to classify her as an ignorable entity that is in question. Random guess, bicycle wheels classified as rolling tumbleweed. Rather like deciding to tread on a stick that turns out to be a brown snake.

 

That's all right then, no need to be worried about software developers who don't understand how people use and misuse roads...



#1762 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 10 May 2018 - 09:37

Uh, please point out where I have ever said it was fine. I think several people at Uber should be looking at jail. If the guy working on the software called himself a software ENGINEER, him as well.



#1763 Ray Bell

Ray Bell
  • Member

  • 67,077 posts
  • Joined: December 99

Posted 19 May 2018 - 21:43

Originally posted by Bloggsworth
Well, that's reassuring - Not.


Nothing about this stuff is in any way either assuring or reassuring...

I dread the thought of self-driving cars altogether, and while I don't think at this stage they'll cause me to choose not to share the road with them, I'd certainly rather I didn't have the choice.

What really amazes me is that officialdom can accept it. How?

There are so many things which are regulated to the hilt and they are not dangerous to anything near the extent that these devices will ultimately become.

And there are other queries I have about them.

1. We live in a world where automation and computerisation is taking away jobs from humans, with the attendant potential for unemployment and poverty. Does having self-driving cars help this at all?

2. All of us have experienced (I assume) blind alleys down which the GPS is likely to lead us; won't that apply to these devices too? Will it know where every bit of roadworks is taking place, where every road-closing accident has taken place, or will it insist on taking the 'shortest' or 'fastest' route no matter what may have cropped up?

3. Component failure is a fact of life. Even if it's rare, it happens. Look at the lengths to which the aircraft industry goes to avoid it, yet eventually something catches them out and dozens or hundreds die and it has to be reworked. Yet they have rigorous maintenance schedules, so what chance do self-driving cars have in the real world?

4. Some of the sensors will rely on 'seeing' the road ahead. What happens if it's snowing, raining heavily or in fog?



.

Edited by Ray Bell, 19 May 2018 - 21:44.


#1764 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 19 May 2018 - 23:40

Falling snow/rain/fog/(smoke) are one of the reasons governing the selection of the test cities at the moment.

 

Lidar as it is currently implemented certainly has problems with these; here's a paper on that which I have not yet had time to read: https://www.adv-radi...s-9-49-2011.pdf

 

An alternative is to switch sensors, eg http://web.media.mit.../~guysatat/fog/

 

Of course the obvious answer is to slow down in any of these conditions, otherwise you'll be playing auto-snooker with everybody else.
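To put rough numbers on the "slow down" point: if the usable detection range shrinks, the speed has to come down so the car can still stop within what it can see. A back-of-envelope sketch - the reaction time and braking figures are my own assumptions, not anything from a real AV stack:

import math

def max_safe_speed(detection_range_m, t_react=0.5, decel=6.0):
    # Require v*t_react + v**2/(2*decel) <= detection_range_m, solve for v (positive root).
    return -decel * t_react + math.sqrt((decel * t_react) ** 2 + 2 * decel * detection_range_m)

for r in (120, 60, 30, 15):   # metres of usable sensor range: clear air down to thick fog
    v = max_safe_speed(r)
    print(f"range {r:>3} m -> max ~{v * 3.6:.0f} km/h")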

 

Snow on the ground is a particular issue in itself.



#1765 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 24 May 2018 - 18:30

https://www.independ...l-a8367926.html

 

Strange kind of autonomous system that hands back emergency braking to the "Operator" in an emergency, too late for said operator to do anything about it...



#1766 Charlieman

Charlieman
  • Member

  • 1,815 posts
  • Joined: October 09

Posted 25 May 2018 - 10:30

https://www.independ...l-a8367926.html

 

Strange kind of autonomous system that hands back emergency braking to the "Operator" in an emergency, too late for said operator to do anything about it...

The link describes the fatal accident between a pedestrian pushing a bicycle and an Uber system-controlled AV. The explanation from the USA NTSB for the AV's failure to brake seemed muddled to me at first. I read other reports which clarified the NTSB statement. Bizarrely they confirmed that the Uber AV did not have autonomous emergency braking capability or appropriate operator warning systems.

 

I'm amazed and disgusted that anyone would test such a functionally limited AV on the road. I trust that established principles of engineering ethics will be applied to further investigations.



#1767 mariner

mariner
  • Member

  • 1,846 posts
  • Joined: January 07

Posted 25 May 2018 - 10:40

There is a report in today's Financial Times (FT) summarizing the NTSB preliminary review of the Tempe fatality.

 

The NTSB say the AV systems first identified an "unknown object" via LIDAR six seconds before impact, then classified it as a bicycle, followed by the determination that emergency braking was required 1.3 seconds before impact.

 

So traveling at 43 mph, about 20 meters per second, it took 4.7 seconds, or roughly 90 meters, to decide to brake.
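For anyone who wants to check the arithmetic, using the 6.0 s and 1.3 s figures from the NTSB preliminary report:

MPH_TO_MS = 0.44704
v = 43 * MPH_TO_MS                    # about 19.2 m/s, call it 20 m/s

t_classifying = 6.0 - 1.3             # seconds spent deciding what the object was
print(f"speed: {v:.1f} m/s")
print(f"distance covered while classifying: {v * t_classifying:.0f} m")   # ~90 m
print(f"distance left at the braking decision: {v * 1.3:.0f} m")          # ~25 m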

 

The really alarming part is what happened, or didn't happen, next according to the NTSB report. As reported by the FT, "The car did not attempt to stop itself at that point" as a result of a deliberate system design decision by Uber. Quoting the NTSB: "the vehicle operator is relied upon to intervene and take action. The system is not designed to alert the operator."

 

Those are some of the most chilling words I have read recently. IF true, it isn't about the very slow reaction of the AV system, or the lack of attention by the woman "driver"; it was a basic design decision by human engineers.

 

My apologies for repeating Bloggsworth's post info! I didn't see his bit higher up.

 

Nonetheless the report is truly shocking to my mind. It seems to me there is a huge gap between the standards for aircraft imposed by the FAA etc. and what the same governments have allowed the AV industry.


Edited by mariner, 25 May 2018 - 10:46.


#1768 Cig35

Cig35
  • Member

  • 531 posts
  • Joined: March 15

Posted 25 May 2018 - 10:59

https://www.independ...l-a8367926.html

 

Strange kind of autonomous system that hands back emergency braking to the "Operator" in an emergency, too late for said operator to do anything about it...

I think what they mean is that the emergency braking system, both hardware and software, supplied by the car manufacturer is disconnected while the car is in autonomous mode, and any braking should then be triggered by the sensors and software used by the Uber system.

 

(Autoliv, which is a huge car safety developer and manufacturer, is right now spinning off part of their business, Veoneer, which is developing the way a car should "move" between autonomous and manual driving and how that human/machine interface and transfer of control should work. Their development manager is saying that we will probably see autonomous vehicles in a couple of years' time, but they will only be for commercial use and will only be able to operate within quite restricted areas and under specific conditions. But she cannot see fully autonomous vehicles, able to operate in all types of conditions and in an unrestricted geographical area, being realistically available during her lifetime.)



#1769 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 25 May 2018 - 11:11

The NTSB preliminary report is actually fairly clear on what happened. Basically the AV dithered for 4.7 seconds, then handed control back to the human driver, without alerting her to the presence of the victim.

 

I may be reading between the lines but it looks to me like astonishing incompetence and disregard for safety.



#1770 Charlieman

Charlieman
  • Member

  • 1,815 posts
  • Joined: October 09

Posted 25 May 2018 - 11:55

I think what they mean is that the emergency braking system, both hardware and software, supplied by the car manufacturer is disconnected while the car is in autonomous mode, and any braking should then be triggered by the sensors and software used by the Uber system.

That question is the one which prompted me to look for more information.

* Volvo's emergency safety systems were turned off in order that the car could be operated by Uber's AV system.

* Uber's AV test system does not have a function for emergency braking; the operator is supposed to take control.

* Uber's AV test system does not alert the operator that it has identified an emergency.
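Put into code form, the behaviour those three points describe would look something like the sketch below. This is only a paraphrase of the NTSB description for illustration - the names and structure are invented, and it is obviously not Uber's actual software:

class StubVehicle:
    def brake(self, full=False):
        print("braking", "hard" if full else "gently")
    def alert_operator(self, message):
        print("ALERT:", message)

EMERGENCY_BRAKING_ENABLED = False   # reportedly disabled while under computer control
OPERATOR_ALERT_ENABLED = False      # reportedly no alert was given to the operator

def on_emergency_braking_required(vehicle):
    if EMERGENCY_BRAKING_ENABLED:
        vehicle.brake(full=True)             # what the Volvo's own system would have done
    elif OPERATOR_ALERT_ENABLED:
        vehicle.alert_operator("BRAKE NOW")  # at least tell the human something is wrong
    else:
        pass                                 # rely silently on the operator noticing in time

on_emergency_braking_required(StubVehicle())   # prints nothing: no braking, no alert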

 

The NTSB preliminary report is actually fairly clear on what happened. Basically the AV dithered for 4.7 seconds, then handed control back to the human driver, without alerting her to the presence of the victim.

 

I may be reading between the lines but it looks to me like astonishing incompetence and disregard for safety.

 

It's hard to understand the decision processes. Maybe there were half a dozen teams working to solve problems and "emergency braking" was always somebody else's problem. 

 

I'm an AV sceptic but this incident looks to be more about Uber than AV.



#1771 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 25 May 2018 - 12:49

I think what they mean is that the emergency braking system, both hardware and software, supplied by the car manufacturer is disconnected while the car is in autonomous mode, and any braking should then be triggered by the sensors and software used by the Uber system.

 

(Autoliv, which is a huge car safety developer and manufacturer, is right now spinning off part of their business, Veoneer, which is developing the way a car should "move" between autonomous and manual driving and how that human/machine interface and transfer of control should work. Their development manager is saying that we will probably see autonomous vehicles in a couple of years' time, but they will only be for commercial use and will only be able to operate within quite restricted areas and under specific conditions. But she cannot see fully autonomous vehicles, able to operate in all types of conditions and in an unrestricted geographical area, being realistically available during her lifetime.)

 

Is that what they said or what you would like it to mean?



#1772 GreenMachine

GreenMachine
  • Member

  • 1,570 posts
  • Joined: March 04

Posted 25 May 2018 - 22:33

Without knowing the details, I (again :rolleyes:) point to the role of 'human factors'.

 

There was a driver.  The driver did not intervene (at all?  in time?).  So what was the driver doing, sleeping? texting?  Why was the driver disengaged from the driving task?  At least we can be pretty confident about that: the driver was effectively a passenger while the automatic systems were operating.  Never mind that the 'driver' had a moral and (I presume) a legal responsibility to be in control, or to take control when required; the reality is that as humans are relegated to passenger-like roles, they behave like passengers.  Perhaps the key question is what the operators of these trials are doing to deal with this fatality-inducing condition.

 

I have earlier pointed out that the interregnum between fully manual and fully automatic driving will be the period of highest risk, and this appears to be borne out.



#1773 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 25 May 2018 - 23:22

According to the NTSB she was looking at the diagnostics displayed on the iPad mounted to the dashboard. 1.3 seconds before the collision the car handed control back to her without a warning, and did not draw attention to the need for braking or evasive action. So, if we take it at face value, in 1.3 seconds she had to (a) stop looking at the iPad and realise she was in control (she grabbed the wheel about 1 second before impact) and (b) spot the victim, and (c) brake.
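A quick sanity check on how little 1.3 seconds is at that speed, using the 43 mph figure from the report and an assumed hard-braking deceleration of 7 m/s^2 plus a 1 s human reaction time (both my assumptions, not figures from the NTSB):

v = 43 * 0.44704          # 43 mph in m/s, about 19.2 m/s
decel = 7.0               # assumed emergency deceleration, m/s^2
t_handback = 1.3          # seconds to impact when the braking decision was reached
t_human = 1.0             # assumed human perception/reaction time, s

print(f"distance to impact at that point: {v * t_handback:.0f} m")          # ~25 m
print(f"distance needed to stop from 43 mph: {v**2 / (2 * decel):.0f} m")   # ~26 m
print(f"distance left after a {t_human:.0f} s human reaction: {v * (t_handback - t_human):.0f} m")  # ~6 m

On those assumptions, even a machine braking instantly at that point would not quite have stopped in time (though it would have scrubbed off most of the speed); a human given no warning had essentially no chance.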

 

You need to read the report. It's all there. Your version is incorrect.



#1774 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 25 May 2018 - 23:27

Oh, and the boring answer is, they should do what I have to do if I am performing a non standard test. Complete an FMEA and get it signed off and implement whatever the corrective actions are.



#1775 GreenMachine

GreenMachine
  • Member

  • 1,570 posts
  • Joined: March 04

Posted 26 May 2018 - 10:10

I stand corrected Greg, she wasn't texting.  But she wasn't watching the road, which was the point I was making.  As you point out, she had to change gears fast, reorient herself from being a technician to being a driver, and there just wasn't enough time (apparently).

 

Maybe these tests should involve two people, a technician to worry about the systems, and a driver to worry about who or what they are going to hit, and to take corrective action if that looks likely.



#1776 Dmitriy_Guller

Dmitriy_Guller
  • Member

  • 4,854 posts
  • Joined: July 01

Posted 27 May 2018 - 07:10

That sounds like a cartoonish level of incompetence.  The car doesn't do emergency braking itself, something that many affordable passenger cars already do today, but instead kicks it back to the human to react properly?  Without even giving a hint as to what may be wrong?  "Here, take this, kthxbye."



#1777 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 27 May 2018 - 22:44

I suspect what actually happened was that the current level of software was designed around having two people in the car, one a driver and one to monitor the system. In that case AEB is redundant, and so they may have left that for later. Then as a cost reduction they got rid of the full-time driver, leaving the remaining person to do both jobs. If that train of events is correct then it is reminiscent of the Challenger launch. Nothing catastrophic went wrong in the previous launches, so it's OK to launch this time... 



#1778 GreenMachine

GreenMachine
  • Member

  • 1,570 posts
  • Joined: March 04

Posted 28 May 2018 - 07:46

... except they were warned about the problem.  The launch (iirc) had been postponed because of the low temperature, and the manager(s) over-ruled the techies (maybe the temperatures weren't quite as low as those that had caused the postponement?). They pushed the button ... and oops, maybe they were too low  :blush: 

 

You could be right Greg.  Save money on the second person, oh wait, we didn't factor in the compensation/damages claim(s).



#1779 Charlieman

Charlieman
  • Member

  • 1,815 posts
  • Joined: October 09

Posted 29 May 2018 - 13:00

...oh wait, we didn't factor in the compensation/damages claim(s).

For the benefit of younger people... In 1971, Ford determined that a redesign of a Pinto model (which was prone to rear end fuel tank accidents) was more expensive to fix than paying money to accident victims. It's a case study taught in classes about engineering ethics and about how engineers make mistakes. I attended similar short courses about industrial noise and pollution.




#1780 BRG

BRG
  • Member

  • 17,515 posts
  • Joined: September 99

Posted 31 May 2018 - 21:18

I know that it is about a Tesla in autopilot mode, not a true driverless car, but this story made me smile.  It had to be a cop car that the bozo hit!

 

And it was in California, so it fits the thread so well!



#1781 GreenMachine

GreenMachine
  • Member

  • 1,570 posts
  • Joined: March 04

Posted 23 June 2018 - 05:42

Without knowing the details, I (again :rolleyes:) point to the role of 'human factors'.
 
There was a driver.  The driver did not intervene (at all?  in time?).  So what was the driver doing, sleeping? texting?  Why was the driver disengaged from the driving task?  At least we can be pretty confident about that: the driver was effectively a passenger while the automatic systems were operating.  Never mind that the 'driver' had a moral and (I presume) a legal responsibility to be in control, or to take control when required; the reality is that as humans are relegated to passenger-like roles, they behave like passengers.  Perhaps the key question is what the operators of these trials are doing to deal with this fatality-inducing condition.
 
I have earlier pointed out that the interregnum between fully manual and fully automatic driving will be the period of highest risk, and this appears to be borne out.


So, now we know what she was doing and, surprise!, it was not what she was supposed to be doing. http://www.abc.net.a...r-crash/9902208

 

From previous reports we may surmise that her assets are not going to be worth the expense of a lawsuit, so Uber will be the real target.



#1782 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 23 June 2018 - 23:28

And so it should be. It's a systems failure. I may be repeating myself, but if I want to do a non-standard driving test, or evaluate something iffy on a car, then I have to perform an FMEA (https://en.wikipedia...ffects_analysis) and get it signed off by various managers. For instance, as an outcome from an FMEA, when we test rollover on SUVs we fit outrigger bars that prevent them falling over. Obviously these affect the moment of inertia of the vehicle and the results, so in my program I correlate the model to the results with the outriggers on, but for the certification sign-off runs I remove them from the model. That's not ideal, but it is safer than rolling real cars.
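For those who haven't met an FMEA, the scoring step usually boils down to a Risk Priority Number (severity x occurrence x detection, each rated 1-10), and anything above an agreed threshold has to be mitigated before the test is signed off. A minimal sketch - the failure modes, scores and threshold below are invented for illustration, not taken from any real programme:

failure_modes = [
    # (description,                        severity, occurrence, detection)
    ("AEB disabled in autonomous mode",          10,          3,         8),
    ("No alert to operator on emergency",        10,          4,         7),
    ("Single operator doing two jobs",            9,          5,         6),
]

for desc, sev, occ, det in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    rpn = sev * occ * det                       # Risk Priority Number
    action = "mitigate before testing" if rpn >= 100 else "monitor"   # 100 is an arbitrary example threshold
    print(f"RPN {rpn:>3}  {desc:<38} -> {action}")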

 

I can see no reason why 57 years (or more) of safety critical engineering should be ignored because Silicon Valley.



#1783 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 24 June 2018 - 07:25

Greg, Silicon Valley believe that they are superior beings and that different rules should apply...



#1784 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 24 June 2018 - 09:10

...and since they still have to obey the rules of the land, they will find out where 'disruption' ends and responsibility starts.



#1785 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 29 June 2018 - 16:36

Today's news:

 

https://www.youtube....h?v=icMbXn9kz9c



#1786 Lee Nicolle

Lee Nicolle
  • Member

  • 9,985 posts
  • Joined: July 08

Posted 02 July 2018 - 09:23

Nothing about this stuff is in any way either assuring or reassuring...

I dread the thought of self-driving cars altogether, and while I don't think at this stage they'll cause me to choose not to share the road with them, I'd certainly rather I didn't have the choice.

What really amazes me is that officialdom can accept it. How?

There are so many things which are regulated to the hilt and they are not dangerous to anything near the extent that these devices will ultimately become.

And there are other queries I have about them.

1. We live in a world where automation and computerisation is taking away jobs from humans, with the attendant potential for unemployment and poverty. Does having self-driving cars help this at all?

2. All of us have experienced (I assume) blind alleys down which the GPS is likely to lead us, won't that apply with these devices too? Will it know where every bit of roadworks is taking place, where every road-closing accident has taken place, or will it insist on taking the 'shortest' or 'fastest' route no matter what may have cropped up?

3. Component failure is a fact of life. Even if it's rare, it happens. Look at the lengths to which the aircraft industy goes to avoid it, yet eventually something catches them out and dozens or hundreds die and it has to be reworked. Yet they have rigorous maintenance schedules, so what chance to self-driving cars have in a real world?

4. Some of the sensors will rely on 'seeing' the road ahead. What happens if it's snowing, raining heavily or in fog?


Having had problems with the navigation in the red racer [Kia C app D] last year in England: the speed limits shown were often wrong, and it was regularly about 300 metres out in finding a destination. Bloody annoying in the dark and the drizzle! And the car was only about 3 months old.

My brother's TomTom, used in France, was far more basic but at least was on target!

All this is just a hint of what can and will go wrong.

As for aircraft, they have thousands of miles of sky with all the navigation aids and still hit one another. Or the scenery :evil:

.



#1787 gruntguru

gruntguru
  • Member

  • 6,821 posts
  • Joined: January 09

Posted 03 July 2018 - 00:18

Having had problems with the navigation in the red racer [Kia C app D] last year in England: the speed limits shown were often wrong, and it was regularly about 300 metres out in finding a destination. Bloody annoying in the dark and the drizzle! And the car was only about 3 months old.

 

My brother's TomTom, used in France, was far more basic but at least was on target!

All this is just a hint of what can and will go wrong.

As for aircraft, they have thousands of miles of sky with all the navigation aids and still hit one another. Or the scenery :evil:.

 Yes there will be fatal accidents as a result of AV design flaws and hardware failures.

The question is whether AV technology will cause fewer fatalities than the current system (human drivers), which kills more than a million people every year. Airline accidents kill fewer than a thousand people a year (and most of those crashes are down to human error anyway).



#1788 RogerGraham

RogerGraham
  • Member

  • 183 posts
  • Joined: October 12

Posted 03 July 2018 - 11:00

 Yes there will be fatal accidents as a result of AV design flaws and hardware failures.

The question is whether AV technology will cause fewer fatalities than the current system (human drivers), which kills more than a million people every year. Airline accidents kill fewer than a thousand people a year (and most of those crashes are down to human error anyway).

 

Begone, you with your common sense and reason.



#1789 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 03 July 2018 - 12:42

AV will make barely a dent in the overall death-by-driving statistics for about 50 years, given that the highest proportion of road deaths occurs in third-world countries, starting with China and India with their huge populations - countries in which barely 50% of the citizens have flushing toilets, let alone road systems conducive to AVs. Yes, the US has 3 times as many road deaths per 100,000 people as the UK, but how many of those are in the hundreds of thousands of miles of empty countryside with no road markings for the AVs to be guided by?



#1790 gruntguru

gruntguru
  • Member

  • 6,821 posts
  • Joined: January 09

Posted 03 July 2018 - 22:19

Not many I would guess.



#1791 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 04 July 2018 - 08:01

Indeed http://www.businessi...09-2011-11?IR=T

 

"AV will make barely a dent in the overall death-by-driving statistics for about 50 years" Oh well, let's not bother then. Who cares about 50 years time?



#1792 PayasYouRace

PayasYouRace
  • RC Forum Host

  • 21,484 posts
  • Joined: January 10

Posted 06 July 2018 - 19:47

AV will make barely a dent in the overall death-by-driving statistics for about 50 years, given that the highest proportion of road deaths occurs in third-world countries, starting with China and India with their huge populations - countries in which barely 50% of the citizens have flushing toilets, let alone road systems conducive to AVs. Yes, the US has 3 times as many road deaths per 100,000 people as the UK, but how many of those are in the hundreds of thousands of miles of empty countryside with no road markings for the AVs to be guided by?


Surely in that time AVs will become advanced enough to work on any road, just like a human driver.

#1793 BRG

BRG
  • Member

  • 17,515 posts
  • Joined: September 99

Posted 06 July 2018 - 20:27

Surely in that time AVs will become advanced enough to work on any road, just like a human driver.

Hmm, maybe, but whilst AVs look feasible on first world urban streets, the challenge in developing countries with vague or almost non-existent roads is a lot tougher.  And is there enough profit in meeting that challenge?  When you are living on $2 a day, how are you going to afford an expensive AV?  And if it is an EV and there is no mains power in your village....



#1794 PayasYouRace

PayasYouRace
  • RC Forum Host

  • 21,484 posts
  • Joined: January 10

Posted 08 July 2018 - 18:53

Hmm, maybe, but whilst AVs look feasible on first world urban streets, the challenge in developing countries with vague or almost non-existent roads is a lot tougher. And is there enough profit in meeting that challenge? When you are living on $2 a day, how are you going to afford an expensive AV? And if it is an EV and there is no mains power in your village....


It would only be a natural development of self driving technology, relying less on particular road markings and furniture and driving more like a human in terms of the positives of human driving ability. As said further up, there’s plenty of dirt tracks and country lanes in developed countries too.

#1795 BRG

BRG
  • Member

  • 17,515 posts
  • Joined: September 99

Posted 09 July 2018 - 16:29

It would only be a natural development of self driving technology, relying less on particular road markings and furniture and driving more like a human in terms of the positives of human driving ability. As said further up, there’s plenty of dirt tracks and country lanes in developed countries too.

Can you imagine how well an AV would get on in India in their manic urban traffic?  It would last about 10 seconds.  All the algorithms in the world can't cope with that sort of insanity!

 

And India isn't even a developing country anymore. 



#1796 Bloggsworth

Bloggsworth
  • Member

  • 8,696 posts
  • Joined: April 07

Posted 10 July 2018 - 07:12

Can you imagine how well an AV would get on in India in their manic urban traffic?  It would last about 10 seconds.  All the algorithms in the world can't cope with that sort of insanity!

 

And India isn't even a developing country anymore. 

 

You're a Luddite - Of course AVs will cope...



#1797 gruntguru

gruntguru
  • Member

  • 6,821 posts
  • Joined: January 09

Posted 11 July 2018 - 03:54

Can you imagine how well an AV would get on in India in their manic urban traffic?  It would last about 10 seconds.  All the algorithms in the world can't cope with that sort of insanity! 

I disagree. An AV could cope with the insanity of Indian roads with less than 0.001% of all the algorithms in the world. Algorithms are pretty smart you know!



#1798 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 11 July 2018 - 08:16

I have seen AV data collected from SE Asian roads, so I'm yet again inclined to think that maybe you are not the first person ever to think of this problem.



#1799 Charlieman

Charlieman
  • Member

  • 1,815 posts
  • Joined: October 09

Posted 11 July 2018 - 10:42

Hmm, maybe, but whilst AVs look feasible on first world urban streets, the challenge in developing countries with vague or almost non-existent roads is a lot tougher. 

It's a serious matter for some relatively developed countries with large sparsely populated areas too. Consider the countries which made up the former USSR, some of which have exploited mineral or oil/gas assets for many years, and might be quite well off if the income had been used equitably. Start with the 6.6 million square miles of Russia which has big urban areas where an AV would perform as well as in any European city. Then there's the rest of the country where roads haven't changed much in 100 years. 

 

Things are changing of course. In Mongolia 25 years ago, there were about 30 miles of tarmac highway and everyone carried enough fuel for a 300 mile journey; now there are a few thousand miles of tarmac highway and there are fuel stations between principal destinations. 

 

These are extreme cases. Vehicles -- mostly lorries -- which travel long distances on these roads have been developed to suit them, and sometimes an adapted 4x4 car relies on local truckers for a tow. I'm sure that AV car developers are aware of these scenarios.

 

I'm not sure whether AV developers have fully considered the edge cases in the developed world. I'm thinking of places where an AV can operate on 95% mapped tarmac or consistent surface roads, but has a fuzzy 5% on a journey. If I'm driving a conventional car across a tidal causeway, I want to travel at 20-30 mph; I don't want to be stuck behind an AV struggling to navigate.

 

I watched a natural history documentary last night where a family moved to the Shetland Islands (the northernmost part of the British Isles) to film wildlife. Access to their temporary home was via a sand and shingle beach which became a bit tricky after heavy rainfall; water ran off the hills so the unfortunate driver had to get out to test the depth of streams crossing the beach before continuing the journey.




#1800 Greg Locock

Greg Locock
  • Member

  • 5,708 posts
  • Joined: March 03

Posted 11 July 2018 - 11:53

That's one of the reasons why I think L5 cars are so far off. Designing a car that can literally go anywhere requires a design that is impractical 99% of the time. I suspect production cars will be L4, and the geographical/operational limits will expand towards L5 without getting there. For instance, a small, well-mapped country with a moderate climate would be a good candidate for L4 cars, nationally.