I realise this thread is somewhat stale, but I can elaborate a bit on a subject that turns out to be surprisingly complex.
So far as I know, ADS-B doesn't have any means to transmit time information (and it has no particular reason or need to do so). Even if it did, you'd have to trust the aircraft's time, which you can't reliably do. Some might use time from their own GPS receivers, but since it's known that some ADS-B position messages are derived from IRS rather than GPS, there's no particular reason to trust such timestamps even if they existed.
So timestamps must be generated locally, in software. You can still use the timestamps of positional messages as a reference against which to do MLAT on a/c that don't transmit positional messages; however, there are a few wrinkles.
I've got quite a lot of experience in passive measurement of data networks and designing accurate timestamping hardware, so I can say a few things about timestamping packets/messages with some authority.
In all SDR-based ADS-B receivers, timestamps are generated in user-space after demodulation (by dump1090 in my case). It cannot be anywhere else because even if the hardware had timestamping capability, the hardware only streams I/Q samples and so doesn't know when a new ADS-B message begins.
In general, software timestamping sucks because there's an awful lot of variable and unquantifiable latency between the actual receiver and the point where software reads the clock (USB latency has a lot of jitter because of its polling architecture, then you've got the kernel drivers and user-space libraries, amongst other considerations). Even kernel-based timestamping is pretty awful, partly because it's limited to µs resolution, but mostly because of the way packets are delivered by the hardware to the kernel. User-space timestamping is usually even worse because of the way the kernel process scheduler works (typical accuracy ± 10 ms).
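To make that concrete, here's a minimal sketch of kernel receive timestamping on Linux using the SO_TIMESTAMP socket option, which is where the µs-resolution limit comes from (it returns a struct timeval). It's a toy UDP listener on an arbitrary port, purely to show where the kernel attaches its timestamp; nothing in a typical ADS-B chain actually does this:

[code]
/* Sketch: kernel receive timestamps on a UDP socket via SO_TIMESTAMP.
 * Illustrative only -- not part of any real ADS-B receiver chain. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    /* Ask the kernel to record, for each received datagram, the time
     * at which it was handed to the socket layer (µs resolution). */
    setsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(12345);            /* arbitrary demo port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    char data[2048], ctrl[256];
    struct iovec iov = { data, sizeof(data) };
    struct msghdr msg = { 0 };
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    if (recvmsg(fd, &msg, 0) < 0)
        return 1;

    /* The timestamp arrives as ancillary data alongside the payload. */
    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMP) {
            struct timeval tv;
            memcpy(&tv, CMSG_DATA(c), sizeof(tv));
            printf("kernel rx time: %ld.%06ld\n",
                   (long)tv.tv_sec, (long)tv.tv_usec);
        }
    }
    return 0;
}
[/code]

Even this relatively good case only tells you when the kernel handed the packet to the socket layer, not when anything actually arrived at the hardware.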
rtl_sdr has one important advantage: because it receives a constant stream of I/Q samples, it can use the sample count as a timebase (which dump1090 does, indeed, do), but that's only as good as the local oscillator in the SDR receiver. All oscillators drift with both temperature and age, so you can't rely on sample count alone for absolute time, but most oscillators are stable enough second-to-second that differential timestamps will cancel out most of the error.
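To illustrate, counting samples as a differential timebase amounts to nothing more than this (assuming dump1090's default 2.0 MHz sample rate; the indices are made up):

[code]
/* Differential time from the I/Q sample counter, assuming dump1090's
 * default 2.0 MHz sample rate. Indices are made up for illustration. */
#include <stdio.h>
#include <stdint.h>

#define SAMPLE_RATE_HZ 2000000.0  /* nominal; the crystal's real rate
                                     differs by its ppm error */

/* Elapsed time between two message-start sample indices. */
static double delta_seconds(uint64_t a, uint64_t b)
{
    return (double)(b - a) / SAMPLE_RATE_HZ;
}

int main(void)
{
    uint64_t msg1 = 1200000, msg2 = 5200000;  /* hypothetical indices */
    double dt = delta_seconds(msg1, msg2);    /* 2.0 s nominal */

    /* A cheap crystal might be off by ~30 ppm, which over 2 s is only
     * ~60 us of error: the absolute rate is wrong, but short-term
     * differences mostly cancel it out. */
    printf("dt = %.6f s, 30 ppm error = %.0f us\n", dt, dt * 30e-6 * 1e6);
    return 0;
}
[/code]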
If you only need short-term differential time, counting samples is good enough; however, if you need absolute time, you must integrate the sample count with system time (as corrected by NTP) using some sort of minimal-jitter, continuously converging algorithm. I can describe how that works, but it isn't really relevant to this discussion because dump1090 doesn't appear to do it.
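For the curious, the general shape of such an algorithm is a low-gain feedback loop that slews a sample-derived clock towards system time rather than stepping it. The sketch below is a generic illustration of the idea with made-up numbers; it's not dump1090's code and not production quality:

[code]
/* Sketch: discipline a sample-count clock against NTP-corrected system
 * time by correcting only a small fraction of the error each update,
 * so noise in individual readings is averaged out, not passed through.
 * Generic illustration only. */
#include <stdio.h>
#include <stdint.h>

struct disciplined_clock {
    double rate;    /* estimated seconds per sample (~ 1 / 2e6) */
    double offset;  /* estimated system time at sample index 0  */
};

static double sample_to_time(const struct disciplined_clock *c, uint64_t n)
{
    return c->offset + c->rate * (double)n;
}

static void discipline(struct disciplined_clock *c, uint64_t n, double sys_now)
{
    const double gain = 0.01;  /* small gain: low jitter, slow convergence */
    double err = sys_now - sample_to_time(c, n);
    c->offset += gain * err;   /* slew towards system time, never step */
}

int main(void)
{
    /* Start 50 ms out; feed one simulated comparison per second. */
    struct disciplined_clock clk = { 1.0 / 2e6, 1000.0 };
    for (int s = 1; s <= 600; s++) {
        uint64_t n = (uint64_t)s * 2000000;  /* one second of samples */
        discipline(&clk, n, 1000.050 + s);   /* "true" system time    */
    }
    printf("residual offset: %.3f ms\n",
           (1000.050 + 600 - sample_to_time(&clk, 600ULL * 2000000)) * 1e3);
    return 0;
}
[/code]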
As far as I know, dump1090 only emits timestamps in the SBS output feed, and those are sourced from clock_gettime() (i.e. system time, which will be jittery due to the way NTP works). All the other outputs just emit ADS-B messages in one form or another.
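For reference, "system time" here means CLOCK_REALTIME, the wall clock that ntpd/chrony continuously adjust. An SBS-style timestamp field is derived from a reading like the one below, taken well after the message actually reached the antenna (illustrative, not dump1090's actual code):

[code]
/* Sketch: stamping a decoded message with the NTP-disciplined wall
 * clock, formatted like an SBS date/time field. Illustrative only. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    struct tm tm;
    char buf[32];

    /* Read *after* USB transfer, demodulation and scheduling delays,
     * not when the signal arrived at the hardware. */
    clock_gettime(CLOCK_REALTIME, &ts);

    gmtime_r(&ts.tv_sec, &tm);
    strftime(buf, sizeof(buf), "%Y/%m/%d,%H:%M:%S", &tm);
    printf("%s.%03ld\n", buf, ts.tv_nsec / 1000000);  /* SBS-style field */
    return 0;
}
[/code]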
fr24feed has native support for several ADS-B sources but since the source isn't open, I have no idea what it does when doing its own decoding.
The rest of the time, fr24feed reads ADS-B messages via a TCP socket. I'm going to guess that dump1090 + fr24feed is the most common of the TCP scenarios, and it seems to prefer AVR, which (so far as I know) doesn't include timestamp information.
That means fr24feed must be doing its own timestamping at the end of a very long chain of software. In the dump1090 scenario, that chain goes something like this: kernel USB host interface → user-space USB library → dump1090, where decoding takes place → kernel TCP stack → fr24feed, where timestamping and final upload take place. Note that there are multiple kernel/user-space switches going on. Each user-space process probably doesn't run for its full 10 ms timeslice because it will sleep when blocking on I/O, but the latency of all that user/kernel switching is non-trivial.
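If you want a feel for that wakeup latency on your own hardware, a crude experiment is to measure how late a process wakes after blocking in the kernel; the overshoot on a short sleep puts a floor under how accurately user space can timestamp anything it was waiting for. A rough illustration, not a rigorous benchmark:

[code]
/* Sketch: measure scheduler wakeup latency by asking for a 1 ms sleep
 * and seeing how much longer we actually slept. Crude illustration. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec req = { 0, 1000000 };  /* request a 1 ms sleep */
    double worst = 0;

    for (int i = 0; i < 1000; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        nanosleep(&req, NULL);             /* block in the kernel */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double actual = (t1.tv_sec - t0.tv_sec)
                      + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        double late = actual - 0.001;      /* overshoot = wakeup latency */
        if (late > worst)
            worst = late;
    }
    printf("worst wakeup latency over 1000 sleeps: %.3f ms\n", worst * 1e3);
    return 0;
}
[/code]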
This jittery, uncompensated chain is unlikely to be appreciably better than NTP jitter (my Pi reports 2 ms ± 1 ms, which is comparable with ntp.org's estimates).
The only thing I can say for sure is that fr24feed can't count I/Q samples when it's not doing the decoding, so it is limited to jittery NTP-derived system time. It'd be interesting to know whether fr24feed can do better when it is doing its own sample decoding and whether FR24 prefer radars that use FR24's internal decoder over an external decoder.
TL;DR:
1. Using your own stratum-0 clock probably makes no significant difference to MLAT, not least because the mean NTP error is about zero, as opposed to the RMS error against the official NTP pool, which is probably up to 5 ms, possibly more depending on the stratum of the server you get.
2. MLAT referenced to ADS-B position-message timestamps is a fine thing, but it's more complicated than that, because everything depends on the quality of those position-message timestamps, which, as I hope I've demonstrated, probably isn't that great.