ABSTRACT

An object detection system for autonomous vehicles, comprising a radar unit and at least one ultra-low phase noise frequency synthesizer, is provided. The radar unit is configured for detecting the presence and characteristics of one or more objects in various directions.

The radar unit may include a transmitter for transmitting at least one radio signal, and a receiver for receiving at least one radio signal returned from one or more objects. The ultra-low phase noise frequency synthesizer may utilize a clocking device, a sampling reference PLL, at least one fixed frequency divider, a DDS, and a main PLL to reduce phase noise in the returned radio signal. The proposed system overcomes deficiencies of current state-of-the-art radar systems by providing a much lower level of phase noise, which results in improved radar performance in terms of target detection, characterization, and the like. Further, a corresponding method for autonomous vehicles is also disclosed.

Radar target detection system for autonomous vehicles with ultra-low phase noise frequency synthesizer

Inventor: Dr. Tal Lavian

Embodiments of the present disclosure relate generally to sensors for autonomous vehicles (for example, self-driving cars), and in particular to systems that provide ultra-low phase noise frequency generation for radar applications in such vehicles.

BACKGROUND

Autonomous Cars

Sensor Technologies: Cars and other types of vehicles are operated by humans, who rely on senses such as sight and hearing to understand their environment and use their cognitive capabilities to make decisions based on these inputs. In autonomous cars and other autonomous vehicles, the human senses must be replaced by electronic sensors, and the cognitive capabilities by electronic computing power. The most common sensor technologies are as follows:

LIDAR (Light Detection and Ranging): A technology that measures distance by illuminating its surroundings with laser light and receiving the reflections. However, the maximum power of the laser light must be kept limited to make it safe for the eyes, since laser light is easily absorbed by the eye. LIDAR systems are usually quite large and expensive and do not blend in well with the overall design of a car or vehicle. The weight of such systems can reach tens of kilograms, and the cost can run as high as $100,000 in some cases.

Radar (Radio Detection and Ranging): Radar systems are available these days as single-chip solutions that are lightweight and cost-effective. These systems work very well regardless of lighting or weather conditions and offer satisfactory accuracy in determining the speed of objects around the vehicle. That said, mainly because of phase uncertainties, the resolution of radar systems is usually not sufficient.

Ultrasonic Sensors: These sensors use sound waves and measure their reflections from objects surrounding the vehicle. They are very accurate and work in every type of lighting condition. Ultrasonic sensors are also small and cheap and work well in almost any kind of weather, but only because their range is limited to a few meters.

Passive Visual Sensing: This type of sensing uses cameras and image-recognition algorithms. It has one advantage that none of the previous sensor technologies offers: color and contrast recognition. As with any camera-based system, the performance of these systems degrades under poor lighting or adverse weather conditions.

The table below is designed to provide a better understanding of the advantages and disadvantages of the different current sensor technologies and their overall contribution to an autonomous vehicle. It scores the different sensors on a scale of 1 to 3, where 3 is the best score:

Item LIDAR RADAR Ultrasonic Camera
Proximity Detection 1 2 3 1
Range 2 2 1 3
Resolution 2 1 1 3
Operation in darkness 3 3 3 1
Operation in light 3 3 3 2
Operation in adverse weather 2 3 3 1
Identifies color or contrast 1 1 1 3
Speed measurement 2 3 1 1
Size 1 3 3 3
Cost 1 3 3 3
Total 18 24 22 21
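
By way of illustration only, the column totals in the table can be reproduced with a few lines of code. This sketch is ours and not part of the disclosed system; the scores are exactly those of the table above:

```python
# Illustrative sketch (not part of the disclosure): reproduces the
# column totals of the sensor comparison table above.

scores = {
    "Proximity Detection":          (1, 2, 3, 1),
    "Range":                        (2, 2, 1, 3),
    "Resolution":                   (2, 1, 1, 3),
    "Operation in darkness":        (3, 3, 3, 1),
    "Operation in light":           (3, 3, 3, 2),
    "Operation in adverse weather": (2, 3, 3, 1),
    "Identifies color or contrast": (1, 1, 1, 3),
    "Speed measurement":            (2, 3, 1, 1),
    "Size":                         (1, 3, 3, 3),
    "Cost":                         (1, 3, 3, 3),
}

sensors = ("LIDAR", "RADAR", "Ultrasonic", "Camera")
totals = [sum(row[i] for row in scores.values()) for i in range(4)]
for name, total in zip(sensors, totals):
    print(f"{name:>10}: {total}")  # LIDAR 18, RADAR 24, Ultrasonic 22, Camera 21
```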

As shown in the table above, the available sensors for existing autonomous vehicles are LIDAR, sonar, passive vision (cameras), and radar. Many of these sensors come with significant drawbacks, while radar systems avoid most of them and therefore compare favorably, based on the table shown above.

For example, LIDAR systems have a "dead zone" in their immediate surroundings (as shown in FIG. 30A), while a radar system can cover the immediate surroundings of a vehicle as well as long range with enhanced accuracy. To eliminate the "dead zone" as much as possible, LIDARs are mounted high above the vehicle (as shown in FIG. 30B). This limits the use of parking garages, causes difficulty in the use of rooftop accessories, and makes the vehicle less marketable, since such a tower does not blend in well with the design of a vehicle.

Typical LIDAR systems generate enormous amounts of data, which requires expensive and complicated computation capabilities, while radar systems generate only a fraction of this data and thus significantly reduce the cost and complexity of onboard computation systems. For example, some types of LIDAR systems generate on the order of 1 Gb/s of data, which requires powerful computers to process. In some cases, these massive computations require additional computation and correlation of information from other sensors and sources. In some cases, the source for these additional computations is detailed road information collected over time in databases or in enhanced maps; computations and correlations can be performed against past information and data collected over time.

Typical LIDAR systems are sensitive to adverse weather such as rain, fog, and snow, while radar sensors are not. A radar system stays reliable and accurate in adverse weather conditions (as shown in FIG. 31). LIDAR systems use mechanical rotation mechanisms that are prone to failure; radars are solid-state, have no moving parts, and as such have a minimal rate of failure.

Typical LIDAR systems rely on a rotation speed of around 5-15 Hz. This means that if a vehicle moves at a speed of 65 mph, the distance the vehicle travels between "looks" is about 10 ft (a short worked example follows this comparison). Radar sensor systems are able to scan their surroundings continuously, especially when they use one transmitting and one receiving antenna (a bistatic system, as depicted in FIG. 32). Further, LIDAR systems are not accurate in determining speed, and autonomous vehicles rely on radar for accurate speed detection.

Sonar: Sonar sensors are very accurate but can cover only the immediate surroundings of a vehicle; their range is limited to several meters. The radar system disclosed herein is capable of covering these immediate surroundings as well, and with similar accuracy. Further, sonar sensors cannot be hidden behind a car's plastic parts, which poses a design problem; radars can easily be hidden behind these parts without being noticed.

Passive Visual Sensing (Cameras): Passive visual sensing uses the available light to determine the surroundings of an autonomous vehicle. In poor lighting conditions, the performance of passive visual sensing systems degrades significantly; it often depends on the light the vehicle itself provides and as such offers little benefit over the human eye. Radar systems, on the other hand, are completely agnostic to lighting conditions and perform the same regardless of light (as shown in FIG. 33).
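The "distance between looks" figure mentioned above is simple arithmetic. The following sketch reproduces it; the 5-15 Hz rotation rates and the 65 mph speed come from the passage, while the helper function and printout are ours, for illustration only:

```python
# Illustrative sketch of the "distance between looks" arithmetic above.
# The 5-15 Hz rotation rates and the 65 mph speed come from the text;
# everything else (function name, printing) is just for illustration.

MPH_TO_FT_PER_S = 5280.0 / 3600.0  # 1 mph ~= 1.467 ft/s

def distance_between_looks(speed_mph: float, rotation_hz: float) -> float:
    """Distance (ft) a vehicle travels during one LIDAR rotation."""
    return speed_mph * MPH_TO_FT_PER_S / rotation_hz

for hz in (5.0, 10.0, 15.0):
    d = distance_between_looks(65.0, hz)
    print(f"At 65 mph and {hz:4.1f} Hz rotation: {d:5.1f} ft between looks")

# At 10 Hz this gives roughly 9.5 ft, matching the "about 10 ft" figure.
```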
Passive visual sensing is also very limited in adverse weather conditions such as heavy rain, fog, or snow; radar systems are much more capable of handling these situations. Passive visual systems create great amounts of data that must be handled in real time and thus require expensive and complicated computation capabilities, while radar systems create much less data that is easier to handle. Further, passive visual systems cannot see "through" objects while radar can, which is useful in determining whether there are hazards behind vegetation, for instance wildlife about to cross the road.

It is easy to see that in order to cover all possible scenarios, all (or most) of these sensors need to work together as a well-tuned orchestra. But even then, under adverse lighting and weather conditions some sensor types suffer from performance degradation, while radar performance stays practically stable under all of these conditions. The practical conclusion is that radar performance is limited not so much by environmental factors as by its own technology deficiencies, in particular a specific deficiency of one of its internal components, which the invention disclosed herein solves.

Summarizing all of the advantages and disadvantages mentioned above, it is clear that radar systems are efficient in terms of cost, weight, size, and computing power. Radar systems are also very reliable under adverse weather conditions and all possible lighting scenarios. Further, SAR radar systems may be implemented to create a detailed picture of the surroundings and even to distinguish between different types of material.

However, the main drawback of existing radar sensors is the impact of the phase noise of their frequency source, the synthesizer, on their accuracy. Thus, an enhanced system is required (for purposes such as autonomous vehicles) that retains the benefits of radar while mitigating or eliminating the corresponding drawbacks. In addition to improving common existing radar systems, such an enhanced system should also improve bistatic and multistatic radar designs, which use the same platform or different platforms to transmit and receive, by significantly reducing the phase ambiguity created by the distance between the transmitting antenna and the receiving antenna.

Essentially, a signal that is sent out to detect objects (here, a radar signal) is not completely spectrally clean; it is transmitted accompanied by phase noise in the shape of a "skirt" in the frequency domain, and it meets a similar skirt in the receiver signal processing once it is received back. In a very basic target detection system, fast-moving objects shift the returned frequency far enough from the carrier that the weak received signal falls outside this phase noise "skirt". Slow-moving objects, however, such as cars, pedestrians, bicycles, and animals, may produce a received signal that is closer to the carrier and weaker than the phase noise; such a signal is buried under the noise and is practically undetectable or unrecognizable (a minimal numerical sketch follows below). More advanced systems use modulated signals (such as FMCW), but the same challenge of identifying slow-moving objects remains. The determination of two physically close objects versus one larger object is likewise challenged by phase noise.
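To make the "skirt" argument concrete, here is a minimal numerical sketch. It assumes a 77 GHz carrier (a common automotive radar band; the disclosure does not fix one) and uses the standard two-way Doppler relation f_d = 2*v*f_0/c for a monostatic radar; the speeds and the function name are ours, for illustration only:

```python
# Minimal sketch of why slow movers are hard to see through phase noise.
# Assumes a 77 GHz carrier (a common automotive radar band; the text
# does not specify one) and the standard two-way Doppler relation
#     f_d = 2 * v * f0 / c
# for a monostatic radar observing a target at radial speed v.

C = 3.0e8    # speed of light, m/s
F0 = 77.0e9  # assumed carrier frequency, Hz

def doppler_shift_hz(radial_speed_m_s: float) -> float:
    """Two-way Doppler shift for a target at the given radial speed."""
    return 2.0 * radial_speed_m_s * F0 / C

for label, v in (("pedestrian", 1.5), ("bicycle", 5.0),
                 ("car", 15.0), ("fast aircraft", 300.0)):
    print(f"{label:>13}: {doppler_shift_hz(v) / 1e3:8.1f} kHz from the carrier")

# The slow targets return within a few kHz of the carrier, exactly where
# the synthesizer's phase noise "skirt" is strongest, so a noisy
# synthesizer buries them, while a fast target falls well outside the skirt.
```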
Another advanced radar system worth mentioning is Synthetic Aperture Radar (SAR), which is described in a different section of this disclosure.

Many algorithms and methods have been developed to filter out inaccuracies in radar-based imaging, detection, and other result processing. Some are more computationally intensive than others; common to all of them is that they cannot filter out the inherent phase noise of the radar system itself. This is crucial, since much of the information a radar system relies on is carried in the phase of the returning signal. One simple example of this phenomenon: when a radar signal hits a wall that is perpendicular to the ground versus a wall at an angle other than 90 degrees relative to the surface, the phases of the return signals will be slightly different, and this information could be "buried" under the phase noise of the radar system.

Further, speckle noise is a phenomenon in which a received radar signal includes "granular" noise. Basically, these granular dots are created by the sum of all the information scattered back from within a "resolution cell". All of these contributions can add up constructively, add up destructively, or cancel each other out. Elaborate filters and methods have been developed to suppress speckle, but all of them function better and with less effort when the signals have better spectral purity, in other words better phase noise. One of these methods, just as an example, is the "multiple look" method: each "look" is taken from a slightly different point, so that the backscatter also looks slightly different, and these backscatters are then averaged and used for the final image. The downside is that the more "looks" are taken, the more averaging happens, and, as with any averaging, information is lost.

As additional background for this invention, a few phenomena need to be laid out here.

Doppler Effect: The Doppler Effect is the change in frequency or wavelength of a wave for an observer moving relative to its source. This holds for sound waves, electromagnetic waves, and any other periodic phenomenon. Most people know the Doppler Effect from their own experience of hearing a vehicle sounding a siren approach, pass by, and then recede: during the approach the sound waves are "pushed" to a higher frequency, so the siren seems to have a higher pitch; as the vehicle gains distance, the pitch gets lower, since the sound frequency is "pushed" to a lower frequency. The physical and mathematical model of this phenomenon is described by the following formula:

f = ((c + v_r) / (c + v_s)) * f_0

where f is the observed frequency, f_0 is the emitted frequency, c is the propagation speed of the wave in the medium, v_r is the velocity of the receiver relative to the medium (positive when the receiver moves toward the source), and v_s is the velocity of the source relative to the medium (positive when the source moves away from the receiver).
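As a short worked example of the formula above, the following sketch applies it to the siren scenario from the text. The 700 Hz siren tone and the 30 m/s vehicle speed are illustrative values of ours, not taken from the disclosure:

```python
# Worked example of the Doppler formula above, using the siren scenario
# from the text. The 700 Hz tone and 30 m/s speed are illustrative
# values, not taken from the disclosure.

C_SOUND = 343.0  # speed of sound in air at ~20 C, m/s

def observed_frequency(f0: float, v_receiver: float, v_source: float) -> float:
    """f = (c + v_r) / (c + v_s) * f0, with the convention used above:
    v_receiver > 0 when the receiver moves toward the source,
    v_source   > 0 when the source moves away from the receiver."""
    return (C_SOUND + v_receiver) / (C_SOUND + v_source) * f0

f0 = 700.0  # emitted siren tone, Hz
v = 30.0    # vehicle speed, m/s (~67 mph)

approaching = observed_frequency(f0, 0.0, -v)  # source moving toward listener
receding    = observed_frequency(f0, 0.0, +v)  # source moving away

print(f"approaching: {approaching:6.1f} Hz")  # ~767 Hz, higher pitch
print(f"receding:    {receding:6.1f} Hz")     # ~644 Hz, lower pitch
```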

SUMMARY