I have a full node, and have written some simple Python applications with bitcoinrpc. I want to start doing slightly more ambitious things now, e.g. visualisations across the entire blockchain (possibly analysing all transactions), and to slice and dice the data in ways that make me instinctively want to have it queryable in a db (e.g. BigQuery). Then again, there are already over 800m transactions, so the performance of querying a table of transactions might not be any better than querying via bitcoinrpc.
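For context, this is roughly the kind of full-chain scan I mean, sketched with python-bitcoinrpc against a local node (the RPC URL and credentials are placeholders, and `getblock` with verbosity 2 is used to get decoded transactions):

```python
# Minimal sketch: walk every block over RPC and count transactions.
# Placeholder credentials; getblock(hash, 2) returns full transaction data.
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://user:password@127.0.0.1:8332")  # placeholder

tip = rpc.getblockcount()
total_txs = 0
for height in range(tip + 1):
    block_hash = rpc.getblockhash(height)
    block = rpc.getblock(block_hash, 2)  # verbosity 2: include decoded txs
    total_txs += len(block["tx"])
    # ... per-transaction analysis would go here ...

print(f"scanned {tip + 1} blocks, {total_txs} transactions")
```

Each iteration is an RPC round trip, which is what makes me doubt this scales to whole-chain analysis.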
I suppose my question is whether anyone who has made similar decisions in the past has any guiding principles for thinking about when it makes sense to dump large parts of the blockchain into your own db instead of querying it directly via your node? E.g. I'm assuming the Bitcoin API services (e.g. BlockCypher) aren't just running a bunch of full nodes and querying them directly when new requests come in, but have instead structured the data in a way that makes that much more efficient? A related question I have is whether Bitcoin was even designed with maximally-efficient blockchain querying in mind?
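To make the "dump into your own db" option concrete, here is the sort of ETL I have in mind, sketched with SQLite purely for illustration (the table layout, batch size, and summary columns are my own assumptions, not a recommendation):

```python
# Rough sketch of extracting per-transaction summaries via RPC into SQLite
# so that analysis becomes SQL instead of repeated RPC round trips.
import sqlite3
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://user:password@127.0.0.1:8332")  # placeholder

db = sqlite3.connect("chain.db")
db.execute("""CREATE TABLE IF NOT EXISTS txs (
    txid TEXT PRIMARY KEY,
    block_height INTEGER,
    block_time INTEGER,
    n_inputs INTEGER,
    n_outputs INTEGER,
    total_out REAL
)""")
db.execute("CREATE INDEX IF NOT EXISTS idx_txs_height ON txs (block_height)")

def load_block(height):
    """Fetch one block via RPC and insert a summary row per transaction."""
    block = rpc.getblock(rpc.getblockhash(height), 2)
    rows = [
        (
            tx["txid"],
            height,
            block["time"],
            len(tx["vin"]),
            len(tx["vout"]),
            float(sum(out["value"] for out in tx["vout"])),
        )
        for tx in block["tx"]
    ]
    db.executemany("INSERT OR REPLACE INTO txs VALUES (?, ?, ?, ?, ?, ?)", rows)

for height in range(rpc.getblockcount() + 1):
    load_block(height)
    if height % 1000 == 0:
        db.commit()
db.commit()

# Once loaded, questions like "transactions per day" become simple SQL:
for day, n in db.execute(
    "SELECT date(block_time, 'unixepoch') AS d, COUNT(*) FROM txs GROUP BY d LIMIT 5"
):
    print(day, n)
```

Whether maintaining something like this (and keeping it in sync with the tip) is worth it versus just hitting the node directly is exactly the trade-off I'm trying to get a feel for.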