  • #61
    Maybe change the schema name of the new data. Then you can query both schemas.
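
    Something like this, for instance - a minimal sketch, with adsb_old as a made-up name for the renamed old schema and the column names as they appear in the callsign table:

    -- old data kept in its own schema (datetime strings)
    SELECT acid, callsign, utcdetect
    FROM adsb_old.callsign
    WHERE utcdetect >= '2020-08-22';

    -- same lookup against the new schema (bigint milliseconds)
    SELECT acid, callsign, utcdetect
    FROM adsb.callsign
    WHERE utcdetect >= UNIX_TIMESTAMP('2020-08-22') * 1000;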

    I think the reason most people don't use it is that it's difficult to install MySQL, MySQL Workbench, and Java. It's a steep hill to climb if you don't have that stuff already. If I was going to do it again, I'd probably write it in C. But the nice thing about Java is the easy threading syntax. Golang would be good too.

    I was going to create a cloud version of the database, with multiple connections, but it would cost about $20 a month, so you'd need 20 users to pay the bill, ha.

    Most people today just install FlightAware on a Raspberry Pi and are done with it. Let the big boys run the database.



    • #62
      Yea, SQLite is probably going to slow down as the tables get larger. I think you have to "compact" the database every now and then, too.
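
      For what it's worth, SQLite's "compact" is the VACUUM command - run it while nothing is writing to the file:

      -- rebuilds the database file and reclaims the free pages
      VACUUM;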



      • #63
        I tried to do some table adjustments to convert the yyyy-mm-dd hh:mm:ss datetime columns in the different older tables/existing data to bigint, thinking I could export and import it more easily in 1:1 data types matching what the new one is expecting. But didn't have much success. Seemed to time out.

        I've found some much faster database restores tonight, using the RAM at my disposal.

        Nice fresh MySQL install on my desktop, instead of the import/export tools loading data files and dumping, which seem to go line by line.

        I've spent most of the evening discovering how to read the previously exported backups, convert the datetime strings to bigint, then to milliseconds (multiplying by 1000), and then a TZ conversion (it was converting them TO UTC when they were already stored as UTC).

        It's loading 11K rows in 2 seconds on my PC, which is a vast improvement.

        ala callsign.csv...
        "id";"acid";"utcdetect";"utcupdate";"callsign" ;"fl ight_id";"radar_id"
        1;"C81B14";"2012-06-19 21:42:27.114";"2014-10-01 19:40:29.787";"";1;0

        Gets read and inline converted in RAM like this..

        -- skip the exported id column; capture the two datetime strings into user variables
        LOAD DATA INFILE 'callsign.csv'
        INTO TABLE adsb.callsign
        FIELDS TERMINATED BY ";" OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY "\n"
        IGNORE 1 LINES
        (@null, acid, @var1, @var2, callsign, flight_id, radar_id)
        -- datetime string -> epoch seconds -> milliseconds (with the +00:00 -> +12:00 shift applied here)
        SET utcdetect = UNIX_TIMESTAMP(CONVERT_TZ(@var1,'+00:00','+12:00'))*1000,
            utcupdate = UNIX_TIMESTAMP(CONVERT_TZ(@var2,'+00:00','+12:00'))*1000;


        Only now realising it doesn't like something in the new SQL structure and keeps giving me foreign key conflicts when restoring the data. Damn

        Posts not to be taken as official support representation - Just a helpful uploader who tinkers



        • #64
          Maybe do the modestable first.
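
          Roughly this, as a sketch - assuming the other tables reference modestable and its export follows the same pattern as the callsign one:

          -- load the referenced (parent) table before callsign/target,
          -- so the foreign keys have rows to point at
          LOAD DATA INFILE 'modestable.csv'
          INTO TABLE adsb.modestable
          FIELDS TERMINATED BY ";" OPTIONALLY ENCLOSED BY '"'
          LINES TERMINATED BY "\n"
          IGNORE 1 LINES;

          -- or, if the ordering gets messy, switch the checks off just for
          -- the session while the restore runs
          SET FOREIGN_KEY_CHECKS = 0;
          -- ... run the LOAD DATA statements ...
          SET FOREIGN_KEY_CHECKS = 1;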



          • #65
            Looks like I removed a record and threw it off. Oops. Reverted to a copy of the copy of the copy and it imported OK.
            (It also had 4 lines of 'new' capture app data with 4-digit milliseconds that it didn't like.)

            Now for one of the BIG tables.
            Posts not to be taken as official support representation - Just a helpful uploader who tinkers



            • #66
              Relatively successful

              Seems all lines imported. These 2am nights are catching up on me though.

              Need to do some queries on the old-format dbase vs the new and see if I get the same results. But promising.

              Doesn't seem like I can export this week's SQLite data and tack it on the end, however, due to auto-increment and flight_id constraints :/
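
              If I do end up trying it, the usual trick seems to be staging the export and re-basing the conflicting keys on the way in. A rough sketch (sqlite_callsign is a made-up staging table, and the same offset would have to go onto whatever table flight_id actually lives in, so this is just the idea):

              -- hypothetical staging table for this week's SQLite rows
              CREATE TABLE adsb.sqlite_callsign LIKE adsb.callsign;
              -- ...load the SQLite export into it the same way as the old backups...

              -- shift flight_id past what the big table already holds and let
              -- auto-increment hand out fresh id values
              SET @off = (SELECT MAX(flight_id) FROM adsb.callsign);
              INSERT INTO adsb.callsign (acid, utcdetect, utcupdate, callsign, flight_id, radar_id)
              SELECT acid, utcdetect, utcupdate, callsign, flight_id + @off, radar_id
              FROM adsb.sqlite_callsign;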

              But all may not be lost
              Posts not to be taken as official support representation - Just a helpful uploader who tinkers



              • #67
                I've made a mess of the times... maybe

                Is the writer hard-coding UTC now? Or looking at the SQL server's set TZ and so on and converting on the way in?

                Think I need to re-import my historic data and NOT apply a timezone change like I did above. (Perhaps I got confused trying to work it out.)

                Running a new reworked query set to handle the new unix time format, I can get output matching the last Aug 22 entries from the original dbase. But new stuff added as of today, with the same lookup, is spitting out results 12hrs out.
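
                The sort of lookup I mean, roughly (a sketch against the callsign table, using the last capture day as the example range):

                -- last day of old capture, against the bigint millisecond columns
                SELECT acid, callsign,
                       FROM_UNIXTIME(utcdetect / 1000) AS detected
                FROM adsb.callsign
                WHERE utcdetect >= UNIX_TIMESTAMP('2020-08-22 00:00:00') * 1000
                  AND utcdetect <  UNIX_TIMESTAMP('2020-08-23 00:00:00') * 1000;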
                Posts not to be taken as official support representation - Just a helpful uploader who tinkers



                • #68
                  It should be using UTC only. The program takes the current time in milliseconds, subtracts the Zone Offset, and then subtracts the Daylight Savings Time Offset.

                  I don't know where it gets this from but I assume the OS computer time. I checked the numbers in my table with https://currentmillis.com/ and they seem reasonable.



                  • #69
                    Oops, seems reasonable in MySQL, but I just looked at SQLite and it shows my local time, 5 hours ahead??

                    Edit: No, that's not right. The number in the database is UTC time, and it shows the time and date +5 hours, which is my zone offset.
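
                    Both views are easy to get out of SQLite for checking, since the stored value is just epoch milliseconds:

                    -- 'unixepoch' renders the stored UTC milliseconds as UTC;
                    -- adding 'localtime' shifts it to the OS timezone
                    SELECT datetime(utcdetect / 1000, 'unixepoch')              AS utc_time,
                           datetime(utcdetect / 1000, 'unixepoch', 'localtime') AS local_time
                    FROM callsign LIMIT 5;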
                    Last edited by coupe; 2020-08-31, 14:15.



                    • #70
                      Blast. Have to figure out a trade-off. It's weird 'cause I can put the times logged into an epoch converter and the historical ones are 12hrs out as expected. But new ones are current.

                      Plot thickens. Have to test on the original logger to see if it's similar.

                      Worst case, reimport it all less the offset I applied during the conversion.
                      Posts not to be taken as official support representation - Just a helpful uploader who tinkers



                      • #71
                        I'm fast discovering.. that times suck.

                        It may be that, as I am using FROM_UNIXTIME to decode the 13-char stamp into a readable time and do an INTERVAL 1 DAY subtraction, it possibly does an auto conversion to the local timezone during the query... GAH!

                        And yes, I got totally different results with SQLite too. So it may be how MySQL handles it internally now.

                        As others find https://stackoverflow.com/questions/...nixtime-as-utc
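
                        The workaround there basically boils down to pinning the session timezone before the query, so FROM_UNIXTIME stops re-interpreting the value (sketch):

                        -- FROM_UNIXTIME renders in the session timezone, so force that to UTC first
                        SET time_zone = '+00:00';
                        SELECT FROM_UNIXTIME(utcdetect / 1000) AS detected_utc
                        FROM adsb.callsign
                        LIMIT 5;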

                        Need to capture some more test data and compare the original results (correctly changing the text version from 0724 to 1924hrs) with what is stamped now, which I could have sworn was for some reason adjusted to localtime already.

                        Worst case, reimport all the data without the TZ adjust I did above, and see if it works itself out and new data continues on.

                        I've re-broken the auto-increment flight_id again too! If I could work out where the counter that gets tagged in the 'target' table comes from, I could massage it to keep going from where it's at now.
                        Last edited by Oblivian; 2020-09-01, 02:41.
                        Posts not to be taken as official support representation - Just a helpful uploader who tinkers



                        • #72
                          I'm not sure what you are doing there. Maybe just create a new directory, import an empty database, and start over. Start the database up and see what it writes. It should write zulu time as milliseconds. The only place the timezone is used is in the Java code, and I assume it gets the offset from your OS. It subtracts that and the daylight saving time to get UTC.



                          • #73
                            If you want to convert to local time, take the database value, add the timezone offset and the daylight savings time offset (in milliseconds), and you should have it.
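
                            For a fixed offset that's just arithmetic on the stored value - e.g. a sketch using the +12:00 zone from earlier in the thread and assuming no DST:

                            -- zone offset 12 h = 12 * 3600 * 1000 = 43,200,000 ms; DST offset 0 here
                            SET time_zone = '+00:00';   -- keep FROM_UNIXTIME itself out of the adjustment
                            SELECT utcdetect                                     AS stored_utc_ms,
                                   utcdetect + 43200000                          AS local_ms,
                                   FROM_UNIXTIME((utcdetect + 43200000) / 1000)  AS local_readable
                            FROM adsb.callsign
                            LIMIT 5;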



                            • #74
                              That's kinda how I spotted it. I'll throw up some comparison results at some point.

                              Brand new schema. Imported the old data (but had to convert it from the plain-text VARCHAR it was in to bigint), which may have been the downfall.

                              Side by side, the same SELECT got me results for the last day of capture (22-8), which gave hope as it appeared both were now basically identical. One on bigint/new schema + new MySQL, and the old on the old schema + old SQL.


                              I put the new times into an epoch converter to confirm both the text record and the unixtime record were the same, which should have made for a correct +12hr display once adjusted.

                              So I fired up the new capture. And all of a sudden the next new record's millisecond unixtime reported as 7pm (time of capture) rather than 7am.

                              But as long as the timestamp comes from local time and is adjusted, it should always be right. It's obviously going to be dependent on my server settings and query methods.

                              Posts not to be taken as official support representation - Just a helpful uploader who tinkers



                              • #75
                                Prepare for some real twilight zone stuff...

                                Currently 7:41 PM here, Wed 2-9.

                                1598988804576 =
                                GMT: Tuesday, 1 September 2020 7:33:24.576 PM
                                Your time zone: Wednesday, 2 September 2020 7:33:24.576 AM GMT+12:00

                                And fun fact, you can set the next auto-increment number using

                                ALTER TABLE tablename AUTO_INCREMENT = 1
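
                                So, to pick the counter up from where the imported data actually ends (a sketch, assuming the counter lives on the target table - ALTER TABLE won't take an expression, so grab the value first):

                                -- find where the import left off (whichever column feeds flight_id)
                                SELECT MAX(flight_id) + 1 FROM adsb.target;
                                -- then plug that number in, e.g. if it came back as 48213:
                                ALTER TABLE adsb.target AUTO_INCREMENT = 48213;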
                                Posts not to be taken as official support representation - Just a helpful uploader who tinkers

