As the "proper solution" here is of course not using PDFs that are hard-to-parse, but force elections to have machine parseable outputs. And LLMs can "fix in place" stupid solutions.
That's not a hate on the author though. I needed to do some PDF parsing for bank statements before; but also; the proper long-term solution is force banks (by law or by public interest) to have parseable statements, not parse it!
Like putting LLMs to understand bad codebase will not fix the bad codebase, but will build on top of it.
Totally reasonable view, and one of our volunteers actually got the law in Kansas changed to mandate electronic publishing of statewide precinct results in a structured format! But finding legislative champions for this issue isn't easy.
I’ve tried using LLM’s to do the same exact thing (turning precinct-level election results into a spreadsheet) and in my experience they worked rather poorly. Less accurate than traditional OCR, and considering how many fixes I had to make, altogether slower than manual entry. The resolution of the page made an outsized difference. It’s nice that you got it to work, but I am skeptical of it as a permanent solution.
Tangentially- I appreciate what OpenElections does- however, I wish there was a similar organization that did not limit themselves to officially certified results. There are already other organizations who collect precinct results post-2016, and using only official results basically limits you to 2008 and afterwards, but historical election results are the real intrigue. Not to mention that I have noticed many blatant errors in election results that have supposedly been “certified” by a state/county government. The precinct results Pennsylvania publishes, for example, are riddled with issues.
I think that we should encourage elections to _not_ be standardized. The problems among various polities in the USA have many different issues and should not be forced to conform to a specific way that elections should be done. This is a social problem and we should not cram it into a technical solution. Legibility of elections should be maintained at the local level, trying to make things legible at a national level is in my opinion unwanted. As much as I would like the data to be clean, people are not so clean. Even if they used slightly more structured formats than PDFs, the differences between polities must be maintained as long as they are different polities.
The way that OpenElections handles this, with 'sources' and 'data' directories I think is a good way to bridge the gap.
Not being standardized is fine and even a positive (diversity of technology vendors is a security feature and increases confidence in elections). But producing machine readable outputs of some sort, instead of physical paper and PDFs, is clearly a positive as well.
How is it unwanted to have a standardized database of _results_? They're partly going to be used in a federal context, right?
We do this pretty decently in India - the results of pretty much every election run by the Election commission is updated on https://results.eci.gov.in/# and it's the same for the whole country.
Elections at the local level should be governed by the locality. I do not see the need for standards at a higher level, other than for democracy to be maintained in some fashion. External data reporting certainly need not be standardized at t̶h̶e̶ ̶l̶o̶c̶a̶l̶ [sic] a higher level.
I have had to do some bank statements to CSV conversions before and still do occasionally and https://tabula.technology/ has been invaluable for this.
In other news, any bank that does not produce a standard CSV file for their bank statements should be fined $1m per day until they do. It's ridiculous that this isn't the first option when you go to download them.
I had Gemini convert a bunch of charity forms yesterday, and the deviation was significant and problematic. Rephrasing questions, inventing new questions, changing the emphasis; it might be performing a lot better for numerical data sets, but it's rare to have one without a meaningful textual component.
I've seen similar. I wonder if traditional organizational solutions, like those employed by the US Military or IBM, might be applicable. Redundancy is one of their tools for achieving reliability from unreliable parts. Instead of asking a single LLM to perform the task, ask 10 different LLMs to perform the same task 10 different times and count them like votes.
In college (about 15 years ago) I worked for a professor who was compiling precint level results for old elections. My job was just to request the info and then do manual data entry. It was abysmally slow.
This application seems very good - but still a bit amazing that lawmakers haven't just required that all data be uploaded via csv! Even if every csv was slightly different format, it would be way easier for everyone (LLM or not).
I could be wildly off-base, but I wonder if some of these systems are airgapped, and the only way the data comes off of the closed system is via printing, to avoid someone inserting a flash drive full of malware in the guise of "copying the CSV file." Obviously there are or should be technical ways to safely extract data in a digital format, but I can see a little value in the provable safety that airgapping gives you.
One key problem is that the US has tens of thousands of local governments, and each of them get to solve problems in their own way.
Digital literacy of the kind that understands why releasing a CSV file is more valuable than a PDF is rare enough that most of them won't have someone with that level of thinking in a decision making role.
> most of them won't have someone with that level of thinking
That is an unfair take on it. Come out to the midwest and talk to some of the clerks in the small townships and counties out here. They do know the value of improved data and tech. And they know that investing in better tech can result in a little less money in the bank, which results in less gas to plow the roads, less money to pay someone to mow the ditches, which means on more car wrecked by hitting a deer. So the question is often not about CSV vs. PDF. It is about overall budget to do all the things that matter to the people of their town. Tech sometime just doesn't make the cut.
Besides, elections tend to have their own tech provided by the county or state, so there is standardization and additional help on such critical processes.
People running the smallest of government entities in this country tend to have pretty good heads on their shoulders. They get voted out pdq when they don't.
I'm not convinced by that argument. The data is clearly already in a spreadsheet of some sort already. I don't think "click export as CSV" v.s. "print out as paper and scan as PDF" is a cost decision.
This isn't meant as shade! I have full respect for people working in those roles. Knowing the difference between a CSV file and a PDF file - and understanding why there are people out there who curse the existence of PDFs and celebrate CSVs - is pretty arcane knowledge.
Also note that I blamed people in "a decision making role" - changing procedures requires buy-in from management. People in management roles are less likely to be thinking about CSVs v.s. PDFs than the people actually executing on the work.
As Derek pointed out in https://news.ycombinator.com/item?id=44320001#44322987 this may often be a vendor limitation - in which case there is a cost factor to consider, and the blame can also be shared between the vendor and the person who made the purchasing decision without understanding the difference between PDF and CSV export.
> elections tend to have their own tech provided by the county or state, so there is standardization and additional help on such critical processes.
There's fifty states and almost 4000 counties in the US, not to mention territories. Even if it was only fifty different standards, that's still an overwhelming amount of work and exactly the problem you're replying about.
To get all the states and counties using the same standard? Very impossible. That's the very crux of the tenth amendment. We don't even have consistent traffic laws from state to state.
There's lots of posts on HN for developments and companies doing OCR and Document Extraction. It's a classic CV problem but still has come a long way in the past couple years
Yeah, this is a very well-traveled road, but LLMs have made some big improvements. If you asked me (the guy who wrote the original piece linked above) what I'd use if accuracy alone was the goal, probably would be AWS Textract. But accuracy and structure? Gemini.
Don't have to bother with gerrymandering, or slick legal ways to arrest people for voting with the wrong documents. Or just good old fashioned intimidation, like making the polling place the police station or the ICE detention facility.
It's just a lot smoother process when you can simply write some software to manipulate the count.
Who's gonna check?
(No, seriously, Who's gonna check? Because you also need to layoff everyone in that department once you're in power.)
Corrupted OCR won't help you steal elections. The result counting is a different process, with well designed checks and safeguards.
The problem is that once the counts are done and have been reported a lot of places then print those results out on paper and then scan those papers into a PDF for anyone who asks for a copy!
Many jurisdictions do risk-limiting audits using the original ballots, so futzing with the results wouldn't necessarily make that easier. Also, cast vote records are public in many states - those are records of each ballot cast. So people can check.
I was thinking: could LLMs be long-term regressive?
The "proper solution" here is of course not parsing hard-to-parse PDFs, but forcing elections to produce machine-parseable outputs. And LLMs let people "fix in place" bad solutions instead.
That's not hate for the author, though. I've had to do some PDF parsing for bank statements myself; but again, the proper long-term solution is to force banks (by law or by public pressure) to publish parseable statements, not to parse the PDFs!
It's like using LLMs to understand a bad codebase: that won't fix the bad codebase, it will just build on top of it.
oh well c'est la vie
Totally reasonable view, and one of our volunteers actually got the law in Kansas changed to mandate electronic publishing of statewide precinct results in a structured format! But finding legislative champions for this issue isn't easy.
I've tried using LLMs to do the exact same thing (turning precinct-level election results into a spreadsheet), and in my experience they worked rather poorly: less accurate than traditional OCR, and considering how many fixes I had to make, altogether slower than manual entry. The resolution of the page made an outsized difference. It's nice that you got it to work, but I am skeptical of it as a permanent solution.
Tangentially: I appreciate what OpenElections does. However, I wish there were a similar organization that did not limit itself to officially certified results. There are already other organizations that collect precinct results post-2016, and using only official results basically limits you to 2008 and later, but historical election results are the real intrigue. Not to mention that I have noticed many blatant errors in election results that have supposedly been "certified" by a state/county government. The precinct results Pennsylvania publishes, for example, are riddled with issues.
I think that we should encourage elections to _not_ be standardized. The various polities in the USA have many different problems and should not be forced to conform to one specific way of running elections. This is a social problem, and we should not cram it into a technical solution. Legibility of elections should be maintained at the local level; trying to make things legible at a national level is, in my opinion, unwanted. As much as I would like the data to be clean, people are not so clean. Even if they used slightly more structured formats than PDFs, the differences between polities must be maintained as long as they are different polities.
The way that OpenElections handles this, with separate 'sources' and 'data' directories, is I think a good way to bridge the gap.
Not being standardized is fine and even a positive (diversity of technology vendors is a security feature and increases confidence in elections). But producing machine-readable outputs of some sort, instead of physical paper and PDFs, is clearly a positive as well.
How is it unwanted to have a standardized database of _results_? They're partly going to be used in a federal context, right?
We do this pretty decently in India: the results of pretty much every election run by the Election Commission are posted at https://results.eci.gov.in/# and it's the same for the whole country.
Just breaking the thought down a little: we surely can't say elections shouldn't have standards, right?
Elections at the local level should be governed by the locality. I do not see the need for standards at a higher level, other than for democracy to be maintained in some fashion. External data reporting certainly need not be standardized at a higher level.
I have had to do bank-statement-to-CSV conversions before and still do occasionally, and https://tabula.technology/ has been invaluable for this.
In other news, any bank that does not produce a standard CSV file for their bank statements should be fined $1m per day until they do. It's ridiculous that this isn't the first option when you go to download them.
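For anyone who hasn't tried it: Tabula also has a Python wrapper, tabula-py, that drives the same JVM extractor. A minimal sketch, assuming a hypothetical statement.pdf and a Java runtime on the machine (Tabula itself runs on the JVM):

```python
# pip install tabula-py pandas  (a Java runtime is also required)
import tabula

# Pull every table on every page into a list of pandas DataFrames.
# lattice=True works well for statements with ruled table borders;
# try stream=True instead for whitespace-separated layouts.
tables = tabula.read_pdf("statement.pdf", pages="all", lattice=True)

# Or skip pandas entirely and dump straight to CSV in one call.
tabula.convert_into("statement.pdf", "statement.csv",
                    output_format="csv", pages="all")

for df in tables:
    print(df.head())
```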
I'm not convinced.
I had Gemini convert a bunch of charity forms yesterday, and the deviation was significant and problematic: rephrasing questions, inventing new questions, changing the emphasis. It might perform a lot better on numerical data sets, but it's rare to have one without a meaningful textual component.
Use 2.5 Pro in AI Studio, not the Gemini app.
I've seen similar. I wonder if traditional organizational solutions, like those employed by the US Military or IBM, might be applicable. Redundancy is one of their tools for achieving reliability from unreliable parts. Instead of asking a single LLM to perform the task, ask 10 different LLMs to perform the same task 10 different times and count them like votes.
Why complicate it? One LLM does the work, another reflects on it, and a decision engine reviews; that would be cheaper.
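For what it's worth, a minimal sketch of the voting idea from the parent comment, with a hypothetical call_llm(model, image) helper standing in for whatever OCR/LLM backend you use; returning rows as tuples makes identical extractions hashable and countable:

```python
from collections import Counter

def call_llm(model: str, page_image: bytes) -> tuple:
    """Hypothetical helper: send the page image to one model and
    return the extracted table as a tuple of row tuples."""
    raise NotImplementedError  # wire up your backend here

def majority_extraction(page_image: bytes, models: list[str], runs: int = 10):
    """Run each model several times; return the most common extraction
    and the share of runs that voted for it."""
    votes = Counter()
    for model in models:
        for _ in range(runs):
            votes[call_llm(model, page_image)] += 1
    extraction, count = votes.most_common(1)[0]
    return extraction, count / sum(votes.values())
```

Voting per cell rather than per whole table would be more forgiving, since a single disagreeing cell otherwise throws away an entire run.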
Did you put as much work into it as Derek did? He spent a full hour with Gemini to process the longer document.
In college (about 15 years ago) I worked for a professor who was compiling precinct-level results for old elections. My job was just to request the info and then do manual data entry. It was abysmally slow.
This application seems very good - but it's still a bit amazing that lawmakers haven't just required that all data be published as CSV! Even if every CSV used a slightly different format, it would be way easier for everyone (LLM or not).
I could be wildly off-base, but I wonder if some of these systems are airgapped, and the only way the data comes off of the closed system is via printing, to avoid someone inserting a flash drive full of malware in the guise of "copying the CSV file." Obviously there are or should be technical ways to safely extract data in a digital format, but I can see a little value in the provable safety that airgapping gives you.
In some cases that's true, but for many jurisdictions the results systems are third-party vendor platforms, too.
This is such an excellent example of a responsible and thorough application of vision LLMs to a gnarly data entry problem.
It's also an excellent example of how the lack of a mandated machine-readable format for government publishing is a PITA.
If I was in power and wanted to continue said rule, I’d definitely discourage the adoption of any standardized formatting for election results.
Not, you know, for any nefarious purpose…but because what we’ve used forever was good enough for grandpappy, so it’s obviously good enough for us.
/cough
JSON in a QR code would be a good start. (PRIOR ART, inb4 a troll.)
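A minimal sketch of that, using the Python qrcode library; the payload fields here are hypothetical, not any real reporting schema:

```python
# pip install qrcode[pil]
import json
import qrcode

# Hypothetical precinct payload; field names are made up for illustration.
results = {
    "precinct": "Ward 3, Pct 12",
    "contest": "US Senate",
    "totals": {"Candidate A": 412, "Candidate B": 388, "Write-in": 7},
}

# Encode the JSON as a QR image that can be printed on the results
# sheet right next to the human-readable table.
img = qrcode.make(json.dumps(results, separators=(",", ":")))
img.save("precinct_results_qr.png")
```

One caveat: a single QR code tops out around 3 KB of text, so a long ballot would need several codes or a compressed payload.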
You know, not ignoring the percentage column means you can do math checks yourself.
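Right: a minimal sanity check, recomputing each percentage from the vote counts and flagging rows that disagree beyond rounding error (the column names are hypothetical):

```python
def check_percentages(rows, tolerance=0.05):
    """Flag rows whose printed percentage disagrees with the one
    recomputed from the vote counts."""
    total = sum(r["votes"] for r in rows)
    bad = []
    for r in rows:
        expected = 100.0 * r["votes"] / total
        if abs(expected - r["reported_pct"]) > tolerance:
            bad.append({**r, "expected_pct": round(expected, 2)})
    return bad

rows = [
    {"candidate": "A", "votes": 412, "reported_pct": 51.50},
    {"candidate": "B", "votes": 388, "reported_pct": 48.10},  # should be 48.50
]
print(check_percentages(rows))  # flags candidate B as a likely OCR error
```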
Related: interesting mockups for turning X / open-source Bsky into massive direct-democracy "prosthetic" polls on each post.
And paid polls that the author claims will replace prediction markets:
https://x.com/MelonUsks/status/1929660387995115713
Why is the original source data not available anywhere digitally?
Since it's printed, it is clearly already in a database somewhere. Why can't that just be made public too?
It seems bizarre to OCR printed documents (although I am aware of many companies doing this to parse invoices, etc.).
Welcome to government data.
One key problem is that the US has tens of thousands of local governments, and each of them get to solve problems in their own way.
Digital literacy of the kind that understands why releasing a CSV file is more valuable than a PDF is rare enough that most of them won't have someone with that level of thinking in a decision making role.
> most of them won't have someone with that level of thinking
That is an unfair take on it. Come out to the Midwest and talk to some of the clerks in the small townships and counties out here. They do know the value of improved data and tech. And they know that investing in better tech can mean a little less money in the bank, which means less gas to plow the roads, less money to pay someone to mow the ditches, which means one more car wrecked by hitting a deer. So the question is often not about CSV vs. PDF. It is about the overall budget to do all the things that matter to the people of their town. Tech sometimes just doesn't make the cut.
Besides, elections tend to have their own tech provided by the county or state, so there is standardization and additional help on such critical processes.
People running the smallest of government entities in this country tend to have pretty good heads on their shoulders. They get voted out pdq when they don't.
I'm not convinced by that argument. The data is clearly already in a spreadsheet of some sort. I don't think "click export as CSV" vs. "print out on paper and scan as PDF" is a cost decision.
This isn't meant as shade! I have full respect for people working in those roles. Knowing the difference between a CSV file and a PDF file - and understanding why there are people out there who curse the existence of PDFs and celebrate CSVs - is pretty arcane knowledge.
Also note that I blamed people in "a decision making role" - changing procedures requires buy-in from management. People in management roles are less likely to be thinking about CSVs vs. PDFs than the people actually executing the work.
As Derek pointed out in https://news.ycombinator.com/item?id=44320001#44322987 this may often be a vendor limitation - in which case there is a cost factor to consider, and the blame can also be shared between the vendor and the person who made the purchasing decision without understanding the difference between PDF and CSV export.
Shade where the sunlight should fall. Let’s be honest. Then there’s less to remember.
> elections tend to have their own tech provided by the county or state, so there is standardization and additional help on such critical processes.
There are fifty states and almost 4,000 counties in the US, not to mention territories. Even if it were only fifty different standards, that's still an overwhelming amount of work and exactly the problem you're replying to.
Is it so impossible to expect compliance from government entities?
To get all the states and counties using the same standard? Pretty much impossible. That's the very crux of the Tenth Amendment. We don't even have consistent traffic laws from state to state.
Very interesting! Is this the state of the art for accurate OCR of tabular PDFs, or is there other work in the space to compare against?
There are lots of posts on HN about developments and companies doing OCR and document extraction. It's a classic CV problem, but it has still come a long way in the past couple of years.
Yeah, this is a very well-traveled road, but LLMs have made some big improvements. If you asked me (the guy who wrote the original piece linked above) what I'd use if accuracy alone were the goal, it would probably be AWS Textract. But accuracy and structure? Gemini.
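For the curious, a minimal sketch of the Gemini route using the google-generativeai SDK; the prompt, file name, and model choice here are my assumptions, not the exact workflow from the piece:

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical scanned page of precinct results.
page = genai.upload_file("precinct_results_page1.png")

prompt = (
    "Extract every row of this precinct results table as CSV with the "
    "columns precinct,office,candidate,party,votes. Output only the CSV."
)
response = model.generate_content([prompt, page])
print(response.text)
```

Cross-checking the extracted totals against any printed subtotals is a cheap way to catch hallucinated digits.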
I wonder how difficult it would be to bias a model so that it subtly corrupts election results when performing OCR.
Sounds like an IOCCC challenge (but a much bigger haystack in which to hide the hack).
Surely not hard but why?
Easier to steal elections?
Don't have to bother with gerrymandering, or slick legal ways to arrest people for voting with the wrong documents. Or just good old fashioned intimidation, like making the polling place the police station or the ICE detention facility.
It's just a lot smoother process when you can simply write some software to manipulate the count.
Who's gonna check?
(No, seriously, who's gonna check? Because you also need to lay off everyone in that department once you're in power.)
Corrupted OCR won't help you steal elections. The result counting is a different process, with well-designed checks and safeguards.
The problem is that once the counts are done and have been reported, a lot of places then print those results out on paper and scan those papers into a PDF for anyone who asks for a copy!
Many jurisdictions do risk-limiting audits using the original ballots, so futzing with the results wouldn't necessarily make that easier. Also, cast vote records are public in many states - those are records of each ballot cast. So people can check.
I think you mean risk limiting, right?
Yes, thanks! Fixed.
Freudian Slip?
You may consider reading about risk limiting audits. https://www.voting.works/audits
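For a flavor of how they work, here is a minimal sketch of the BRAVO ballot-polling statistic (Lindeman, Stark & Yates) for a two-candidate contest; real audits handle multiple contests, invalid ballots, and escalation rules, so this is an illustration only:

```python
def bravo_audit(sampled_ballots, reported_winner_share, risk_limit=0.05):
    """sampled_ballots: iterable of True (ballot for the reported winner)
    or False. Returns True once the sample gives strong evidence the
    reported winner really won; False if the sample is inconclusive."""
    s = reported_winner_share        # e.g. 0.55 for a reported 55% winner
    T = 1.0                          # sequential likelihood ratio
    for for_winner in sampled_ballots:
        T *= (s / 0.5) if for_winner else ((1 - s) / 0.5)
        if T >= 1 / risk_limit:      # risk limit met; the audit can stop
            return True
    return False                     # keep sampling, or hand-count everything

# e.g. 80 of 120 randomly sampled ballots for a reported 55% winner
print(bravo_audit([True] * 80 + [False] * 40, 0.55))  # True
```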