Colin Dellow 8300b0cdd9
Alternate node store (#590)
* refactor NodeStore

I'd like to add an alternative NodeStore that can be used when the
`Type_then_ID` property is present in the PBF.

First, a small (?) refactor:

- make `NodeStore` an interface, with two concrete implementations
- extract the NodeStore related things to their own files
- this will cause some churn, as they'll depend on things that also
  need to get extracted to their own files. Short term pain, hopefully
  long term gain in faster compile times.

Changing the invocations of these functions to be virtual may have an
impact on performance. Will need to revisit that before committing to
virtual methods.

* change how work is assigned for ReadPhase::Nodes

Currently, when a worker needs work, it gets the next unprocessed block.
This means blocks are read sequentially at a global level, but from
the perspective of each worker, there are gaps in the blocks they see.

For nodes, we'd prefer to give each worker thread contiguous blocks
from the underlying PBF. This will enable a more efficient storage
for PBFs with the `Sort.Type_then_ID` flag.

* add SortedNodeStore

SortedNodeStore is useful for PBFs with the `Sort.Type_then_ID`
property, e.g. the planet and Geofabrik exports.

It stores nodes in a hierarchy:

- Level 1 is groups: there are 256K groups
- Level 2 is chunks: each group has 256 chunks
- Level 3 is nodes: each chunk has 256 nodes

This allows us to store 2^34 nodes, with a fixed overhead of
only 2M -- the space required for the level 1 pointers.

Groups and chunks store their data sparsely. If a group has 7 chunks,
it only uses storage for 7 chunks.

On Great Britain's 184M node PBF, it needs ~9.13 bytes per node.

Looking up a node can be done in fixed time:

First, get some offsets:
- Group: `nodeID / 65536`
- Chunk: `(nodeID / 256) % 256`
- Position within chunk: `nodeID % 256`

For example, Cape Chignecto Provincial Park has ID 4855703, giving:
- Group 74
- Chunk 23
- Offset 151

Group 74's chunks may be sparse. To map chunk 23 to its physical
location, each group has a 256-bit bitmask indicating which
chunks are present.

The physical location gives you the chunk's `chunkOffset`, which takes
you to its `ChunkInfo` struct.

From there, do the same thing to get the node data.
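
A minimal sketch of that lookup path (names like `GroupInfo` and `findChunk`
are illustrative, not the actual structs):

```c++
#include <bitset>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Illustrative layout only; the real structs pack their data more tightly.
struct ChunkInfo { /* up to 256 packed lat/lon pairs */ };

struct GroupInfo {
    std::bitset<256> chunkMask;    // which of the 256 chunks are present
    std::vector<ChunkInfo> chunks; // only the present chunks, stored densely
};

const ChunkInfo& findChunk(const std::vector<GroupInfo*>& groups, uint64_t nodeID) {
    const uint64_t group = nodeID / 65536;       // 65,536 nodes per group
    const uint64_t chunk = (nodeID / 256) % 256; // 256 chunks per group
    // the node's position within its chunk would be nodeID % 256

    const GroupInfo* g = groups.at(group);
    if (g == nullptr || !g->chunkMask.test(chunk))
        throw std::out_of_range("node not found");

    // Map the logical chunk index to its physical slot by counting how many
    // chunks are present before it -- the popcount that libpopcnt accelerates.
    size_t physical = 0;
    for (size_t i = 0; i < chunk; i++)
        if (g->chunkMask.test(i))
            physical++;
    return g->chunks[physical];
}
```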

This design should also let us do some interesting things down the road,
like efficiently compressing each chunk using something like delta
encoding, zigzag encoding and bit packing. Then, to avoid paying a
decompression cost, we'd likely give each worker a cache of uncompressed
chunks.

* cmake build

* tidy up

* tweak

* tweak

* derp

* mac/windows build

* fix build?

I don't understand why these can't be passed as a copy in the Windows
and Mac builds. Whatever, try passing a reference.

* fix --store

I think nested containers may not be wired up quite correctly.
Instead, manage the char* buffers directly, rather than as
`std::vector<char>`.

I'll fix up the other aspects (attributing libpopcnt, picking
Sorted vs BinarySearch on the fly) later.

* attribution for libpopcnt

* simplify read_pbf

All read phases use the same striding-over-batches-of-blocks approach.

This required changing how progress is reported, as block IDs are no
longer globally monotonically increasing.

Rather than thread the state into ReadBlock, I just adopted 2 atomic
counters for the whole class -- the progress reporter already assumes
that it's the only thing dumping to stdout, so the purity of avoiding
class-global state doesn't buy us anything.
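
A sketch of the batching, assuming a hypothetical `readBlock` and a tunable
`batchSize`:

```c++
#include <algorithm>
#include <atomic>
#include <cstddef>

void readBlock(size_t blockIndex); // hypothetical

std::atomic<size_t> nextBatch(0);  // next unclaimed batch of blocks
std::atomic<size_t> blocksDone(0); // feeds the progress reporter

void worker(size_t totalBlocks, size_t batchSize) {
    while (true) {
        // Claim a contiguous run of blocks, so each worker sees adjacent
        // blocks (and thus adjacent node IDs) from the underlying PBF.
        const size_t start = nextBatch.fetch_add(batchSize);
        if (start >= totalBlocks)
            return;
        const size_t end = std::min(start + batchSize, totalBlocks);
        for (size_t i = start; i < end; i++) {
            readBlock(i);
            blocksDone++; // progress is reported from these counters
        }
    }
}
```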

* clear allocatedMemory

* use scale factor 16, not 8

D'oh, if you get a full group where each chunk is full, you need to be
able to express a value _ever so slightly_ larger than 65,536.

North America and Europe have examples of this.

Use a scale factor of 16, not 8. This'll mean some chunks have up to 15
wasted bytes, but it's not a huge deal. (And I have some thoughts on how
to claw it back.)

* comment out debug stats

* windows build

* derp

* use SortedNodeStore if PBFs have Sort.Type_then_ID

* add --compress-nodes

If the user passes `--compress-nodes`, we use [streamvbyte](https://github.com/lemire/streamvbyte)
to compress chunks of nodes in memory.

The impact on read time is small:
- GB with `--compress-nodes`: 1m42s
- without: 1m35s

But the impact on memory is worthwhile, even across very different
extracts:

North America - 5.52 bytes/node vs 8.48 bytes/node
    with:    169482 groups, 18364343 chunks, 1757589784 nodes, needed 9706167278 bytes
    without: 169482 groups, 18364343 chunks, 1757589784 nodes, needed 14916095182 bytes

Great Britain - 5.97 bytes/node vs 9.25 bytes/node
    with:    163074 groups, 4871807 chunks, 184655287 nodes, needed 1104024510 bytes
    without: 163074 groups, 4871807 chunks, 184655287 nodes, needed 1708093150 bytes

Nova Scotia - 5.81 bytes/node vs 8.7 bytes/node
    with:    26777 groups, 157927 chunks, 12104733 nodes, needed 70337950 bytes
    without: 26777 groups, 157927 chunks, 12104733 nodes, needed 105367598 bytes

Monaco - 10.43 bytes/node vs 13.52 bytes/node
    with:    1196 groups, 2449 chunks, 30477 nodes, needed 318114 bytes
    without: 1196 groups, 2449 chunks, 30477 nodes, needed 412258 bytes
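
A sketch of what the chunk compression could look like using streamvbyte's
core encode/decode calls, assuming a chunk's coordinates are already
flattened into `uint32_t` words; the buffer is sized to streamvbyte's
documented worst case:

```c++
#include <cstdint>
#include <vector>
#include "streamvbyte.h"

// Compress one chunk's worth of packed coordinate words.
std::vector<uint8_t> compressChunk(const std::vector<uint32_t>& words) {
    // Worst case: 1 control byte per 4 integers + 4 data bytes per integer.
    std::vector<uint8_t> out((words.size() + 3) / 4 + 4 * words.size());
    const size_t used = streamvbyte_encode(words.data(), uint32_t(words.size()), out.data());
    out.resize(used); // keep only the bytes actually written
    return out;
}

std::vector<uint32_t> decompressChunk(const std::vector<uint8_t>& bytes, uint32_t count) {
    std::vector<uint32_t> words(count); // caller must remember the element count
    streamvbyte_decode(bytes.data(), words.data(), count);
    return words;
}
```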

* build

* build

* remove __restrict__ to satisfy windows build

* remove debug print, small memory optimization

* use an arena for small groups

* omit needless words

* better short-circuiting for Type-then-ID PBFs

Track metadata about which blocks have nodes, ways and relations.
By default, we assume any block may contain nodes, ways or relations.

If the PBF has the Type-then-ID property, do a binary search to find the
first blocks with ways and relations.

This means ReadPhase::Nodes can stop without scanning ways/relations.
In addition to avoiding needless work, it makes it easier to assign
each worker a balanced amount of work -- now each worker has only
blocks with nodes, which are about the same effort computationally.

It also makes ReadPhase::ScanRelations faster, as it scans exactly the
blocks with relations, skipping the blocks with ways.

Similarly, ReadPhase::Ways is a bit faster, as it doesn't have to read
the blocks with relations.

For North America, this reduces the time to complete the Nodes and
ScanRelations phases from 2m30s to 1m20s.

For GB, it reduces the time from 22s to 9s.
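
The boundary search is an ordinary binary search over block indices; a
sketch, with `blockHasWaysOrRelations` as a hypothetical predicate that
decodes just enough of a block to classify it:

```c++
#include <cstddef>

bool blockHasWaysOrRelations(size_t blockIndex); // hypothetical peek at one block

// With Sort.Type_then_ID, blocks are ordered nodes, then ways, then
// relations, so the predicate is monotonic and binary search applies.
size_t firstNonNodeBlock(size_t totalBlocks) {
    size_t lo = 0, hi = totalBlocks;
    while (lo < hi) {
        const size_t mid = lo + (hi - lo) / 2;
        if (blockHasWaysOrRelations(mid))
            hi = mid;     // boundary is at mid or earlier
        else
            lo = mid + 1; // still in the node-only prefix
    }
    return lo; // == totalBlocks if every block is nodes-only
}
```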

* ReadPhase::Relations - more parallelism

When processing relations for small extracts, there are often fewer
blocks than cores.

Instead, divide the work more granularly, assigning each of the N
threads 1/Nth of the block to process.

This saves 4-5 seconds (which is cumulatively ~20% of runtime) for
the Canadian province of Nova Scotia.
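
A sketch of the finer split, assuming a block's primitive groups can be
processed independently by a hypothetical `processGroup`:

```c++
#include <algorithm>
#include <cstddef>

void processGroup(size_t groupIndex); // hypothetical

// Each of the N threads takes a contiguous 1/Nth slice of the block's
// groups, so even a one-block extract keeps every core busy.
void processRelationBlock(size_t groupCount, size_t threadIndex, size_t threadCount) {
    const size_t per = (groupCount + threadCount - 1) / threadCount;
    const size_t start = threadIndex * per;
    const size_t end = std::min(start + per, groupCount);
    for (size_t g = start; g < end; g++)
        processGroup(g);
}
```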

* extract WayStore, BinarySearchWayStore

* stub in SortedWayStore

...it just throws a lot of exceptions at the moment.

* put SortedNodeStore in a namespace

Also replace some `#define`s with `const`s.

I'm likely going to reuse some names in SortedWayStore, so I'm
namespacing to avoid conflicts.

* don't use SortedWayStore if LocationsOnWays present

* stub in insertLatpLons/insertNodes

* change at() to return a non-mmap vector

SortedWayStore won't create mmap-backed vectors, so we need to return the
lowest common denominator.

This pessimizes performance of BinarySearchWayStore, since it'll have
to allocate vectors on demand.

Longer term: it might be better to return an iterator that hides the heavy
lifting.
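
A sketch of the shared signature this implies (type names are placeholders
for tilemaker's own):

```c++
#include <cstdint>
#include <vector>

using WayID = uint64_t;  // placeholder
using NodeID = uint64_t; // placeholder

class WayStore {
public:
    // Returns a plain std::vector -- the lowest common denominator --
    // so BinarySearchWayStore must copy out of its mmap-backed storage.
    virtual std::vector<NodeID> at(WayID wayid) const = 0;
    virtual ~WayStore() {}
};
```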

* begin drawing the rest of the owl

* flesh out types

* add unit test framework

* naive encoding of ways

Checkpointing since I have something that works.

Future optimizations:

- when all high ints are the same, don't encode them
- compression

* more efficient if high ints are all the same
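
A sketch of the idea: split each 64-bit node ID into 32-bit halves and, when
every high half matches (the common case, since a way's node IDs tend to be
numerically close), store the shared high word once. Names here are
illustrative:

```c++
#include <cstdint>
#include <vector>

struct EncodedWay {
    bool sharedHigh;             // true if every node ID has the same high word
    std::vector<uint32_t> highs; // one entry if sharedHigh, else one per node
    std::vector<uint32_t> lows;  // always one per node
};

EncodedWay encodeWay(const std::vector<uint64_t>& nodeIDs) {
    EncodedWay out;
    out.sharedHigh = true;
    for (const uint64_t id : nodeIDs) {
        const uint32_t high = uint32_t(id >> 32);
        if (!out.highs.empty() && high != out.highs[0])
            out.sharedHigh = false;
        out.highs.push_back(high);
        out.lows.push_back(uint32_t(id)); // truncates to the low 32 bits
    }
    if (out.sharedHigh && !out.highs.empty())
        out.highs.resize(1); // store the shared high word just once
    return out;
}
```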

* extract mmap_allocator.cpp

This is needed to unit test the way store without dragging
in osm_store.

* progress on publishGroup

checkpointing, going to extract a populateMask(...) function

* add populateMask function

* finish publishGroup

* SortedWayStore: implement at

* pass node store into SortedWayStore

* fix alignment

* better logs

* way stores should throw std::out_of_range

This is part of the contract; client code will catch it and reject
relations that have missing ways.

* sortednodestore: throw std::out_of_range

* support way compression

* remove dead code, robust against empty ways

* implement clear()

* maybe fix windows build?

very unclear why this is needed, but we seem to be getting C2131 on this
line.

* don't use variable-length arrays on stack

Workaround for MSVC

* avoid more variable-length arrays

* make the other vectors thread-local

* --no-compress-ways, --no-compress-nodes

sqlite modern cpp wrapper
====
This library is a lightweight modern wrapper around the SQLite C API.
```c++
#include <iostream>
#include "sqlite_modern_cpp.h"
using namespace sqlite;
using namespace std;

int main() {
    try {
        // creates a database file 'dbfile.db' if it does not exist
        database db("dbfile.db");

        // executes the query and creates a 'user' table
        db <<
            "create table if not exists user ("
            "   _id integer primary key autoincrement not null,"
            "   age int,"
            "   name text,"
            "   weight real"
            ");";

        // inserts a new user record.
        // binds the fields to '?'.
        // note that the only types allowed for bindings are:
        //   int, long, long long, float, double,
        //   string, u16string
        // sqlite3 only supports utf8 and utf16 strings; use std::string
        // for utf8 and std::u16string for utf16.
        // note that u"my text" is a utf16 string literal of type char16_t*.
        db << "insert into user (age,name,weight) values (?,?,?);"
            << 20
            << u"bob" // utf16 string
            << 83.25f;

        db << u"insert into user (age,name,weight) values (?,?,?);" // utf16 query string
            << 21
            << "jack"
            << 68.5;

        cout << "The new record got assigned id " << db.last_insert_rowid() << endl;

        // selects from the user table on a condition (age > 18) and
        // executes the lambda for each row returned
        db << "select age,name,weight from user where age > ? ;"
            << 18
            >> [&](int age, string name, double weight) {
                cout << age << ' ' << name << ' ' << weight << endl;
            };

        // selects the count(*) from the user table
        // note that a single-row, single-column result can only be
        // extracted to: int, long, long long, float, double, string, u16string
        int count = 0;
        db << "select count(*) from user" >> count;
        cout << "count: " << count << endl;

        // this also works; the returned value is automatically converted to string
        string str_count;
        db << "select count(*) from user" >> str_count;
        cout << "str_count: " << str_count << endl;
    }
    catch (exception& e) {
        cout << e.what() << endl;
    }
}
```
Transactions
=====
You can use transactions with the `begin;`, `commit;` and `rollback;` commands.
*(don't forget the semicolon at the end of each command)*.
```c++
db << "begin;"; // begin a transaction ...
db << "insert into user (age,name,weight) values (?,?,?);"
<< 20
<< u"bob"
<< 83.25f;
db << "insert into user (age,name,weight) values (?,?,?);" // utf16 string
<< 21
<< u"jack"
<< 68.5;
db << "commit;"; // commit all the changes.
db << "begin;"; // begin another transaction ....
db << "insert into user (age,name,weight) values (?,?,?);" // utf16 string
<< 19
<< u"chirs"
<< 82.7;
db << "rollback;"; // cancel this transaction ...
```
Dealing with NULL values
=====
If your database has rows where some columns may be NULL, you can use `boost::optional` to retain the NULL value between C++ variables and the database. Note that you must enable boost support by including the boost type extension header.
```c++
#include "sqlite_modern_cpp.h"
#include "extensions/boost_optional.h"
struct User {
long long _id;
boost::optional<int> age;
boost::optional<string> name;
boost::optional<real> weight;
};
{
User user;
user.name = "bob";
// Same database as above
database db("dbfile.db");
// Here, age and weight will be inserted as NULL in the database.
db << "insert into user (age,name,weight) values (?,?,?);"
<< user.age
<< user.name
<< user.weight;
user._id = db.last_insert_rowid();
}
{
// Here, the User instance will retain the NULL value(s) from the database.
db << "select _id,age,name,weight from user where age > ? ;"
<< 18
>> [&](long long id,
boost::optional<int> age,
boost::optional<string> name
boost::optional<real> weight) {
User user;
user._id = id;
user.age = age;
user.name = move(name);
user.weight = weight;
cout << "id=" << user._id
<< " age = " << (user.age ? to_string(*user.age) ? string("NULL"))
<< " name = " << (user.name ? *user.name : string("NULL"))
<< " weight = " << (user.weight ? to_string(*user.weight) : string(NULL))
<< endl;
};
}
```
*note: for NDK use the full path to your database file: `sqlite::database db("/data/data/com.your.package/dbfile.db")`*.
## License
MIT license - [http://www.opensource.org/licenses/mit-license.php](http://www.opensource.org/licenses/mit-license.php)