#23024 apparently took the runtime of `Meta/test_pdf.py ~/Downloads/0000` from 1 minute to 5 minutes.
I don't pay attention for one second and someone (also me) makes things so much slower, smh.
#23050 and #23072 took it down to 4.5 minutes, which is better, but still annoyingly slow.
It's not quite as terrible as it looks at first sight: skipping just 0000364.pdf already reduces the time to 1m30s. That file is 579 pages, and every page now takes a bit over 0.5s to render, so it accounts for most of the 5 minutes.
Also removing 0000711.pdf brings it to 1m22s, 0000849.pdf (of #22700 (comment) fame) to 1m20s, and 0000943.pdf to 1m16s.
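The arithmetic checks out; a quick back-of-the-envelope calculation (the 0.52s figure is an assumption standing in for "a bit over 0.5s"):

```python
# Sanity check: 579 pages at ~0.5s each dominates the total runtime.
pages = 579
seconds_per_page = 0.52  # assumed value for "a bit over 0.5s"
total = pages * seconds_per_page
print(f"{total:.0f}s ~= {total / 60:.1f} minutes")  # 301s ~= 5.0 minutes
```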
Need to make the CLUT code much faster somehow.
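One possible direction, sketched very loosely here (this is a hypothetical illustration, not how the actual CLUT code works): pages often reuse a small set of distinct colors, so memoizing the expensive table lookup per input color could skip most of the repeated interpolation work.

```python
from functools import lru_cache

# Hypothetical stand-in for an expensive CLUT interpolation;
# the real code does multi-dimensional table interpolation.
def clut_lookup(r, g, b):
    return ((r * 3 + 1) % 256, (g * 5 + 2) % 256, (b * 7 + 3) % 256)

# Memoize per input color: repeated colors hit the cache
# instead of re-running the interpolation.
@lru_cache(maxsize=65536)
def cached_clut_lookup(r, g, b):
    return clut_lookup(r, g, b)

pixels = [(10, 20, 30), (10, 20, 30), (40, 50, 60)] * 1000
converted = [cached_clut_lookup(*p) for p in pixels]
print(cached_clut_lookup.cache_info().hits)  # 2998 hits out of 3000 lookups
```

Whether a cache pays off in the real renderer depends on how many distinct colors typical pages use; it's only a sketch of the idea.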
In the meantime, this makes testing of unrelated PDF changes faster (but less complete):
% git diff
diff --git a/Meta/test_pdf.py b/Meta/test_pdf.py
index 97ec554a49..ddc683b394 100755
--- a/Meta/test_pdf.py
+++ b/Meta/test_pdf.py
@@ -63,6 +63,20 @@ def main():
     files = []
     for input_directory in args.input:
         files += glob.glob(os.path.join(input_directory, '*.pdf'))
+
+    # 4.5m
+    # without 0000364: 1m30s
+    # without 0000711: 1m22s
+    # without 0000849: 1m20s
+    # without 0000943: 1m16s
+
+    files = [f for f in files if
+             '0000364.pdf' not in f and
+             '0000711.pdf' not in f and
+             '0000849.pdf' not in f and
+             '0000943.pdf' not in f
+             ]
+
     if args.n is not None:
         random.seed(42)
         files = random.sample(files, k=args.n)
Note to self: fine to patch this in while iterating on something, but remember to remove it when testing before making a PR.
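A way to make the note-to-self unnecessary would be a proper command-line option instead of a local patch. A minimal sketch (the `--skip` flag and its wiring are hypothetical, not part of test_pdf.py today):

```python
import argparse

# Hypothetical: let the test driver take an explicit skip list,
# so slow files can be excluded without patching the script.
parser = argparse.ArgumentParser()
parser.add_argument('--skip', action='append', default=[],
                    help='skip PDFs whose path contains this substring')
args = parser.parse_args(['--skip', '0000364.pdf', '--skip', '0000711.pdf'])

files = ['a/0000364.pdf', 'a/0000500.pdf', 'a/0000711.pdf']
files = [f for f in files if not any(s in f for s in args.skip)]
print(files)  # ['a/0000500.pdf']
```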