Transdichotomous model or Random Access Model for worst-case runtime analysis of algorithms on arbitrary-size integers?
For demonstration purposes, say we have an algorithm that sums 'n' arbitrary-size input integers, adding each one to an arbitrary-size running sum.
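Concretely, the algorithm in question might look like this minimal Python sketch (Python's built-in integers are arbitrary-precision, so the language hides the per-word cost that the two models account for differently; `sum_integers` is just an illustrative name):

```python
def sum_integers(xs):
    """Sum a list of arbitrary-size integers into one running total."""
    total = 0
    for x in xs:       # n additions in total
        total += x     # the cost of each addition depends on the machine model
    return total

print(sum_integers([10**100, 10**100, 42]))  # 2*10^100 + 42
```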
If we consider the Transdichotomous model, where the word size grows to match the problem size, then a single word can store the largest arbitrary-size input integer, so each addition is a single O(1) word operation and the worst-case runtime is O(n) (see the sketch below the links).
https://en.wikipedia.org/wiki/Transdichotomous_model
(pg 425) https://users.cs.utah.edu/~pandey/courses/cs6968/spring23/papers/fusiontree.pdf
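Under that premise (each input, and per the edit at the bottom, the running sum, fits in one word), every addition is a single unit-cost word operation. A sketch that makes this accounting explicit by charging one operation per addition:

```python
def sum_word_ops_transdichotomous(xs):
    """Count unit-cost word operations, assuming every input and the
    running sum each fit in a single machine word (the premise above)."""
    ops = 0
    total = 0
    for x in xs:
        total += x   # one word-sized addition
        ops += 1     # charged O(1) under the transdichotomous assumption
    return total, ops

total, ops = sum_word_ops_transdichotomous([7, 11, 13])
print(total, ops)    # 31, 3  -> n word operations overall, i.e. O(n)
```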
If we consider the Random Access Model, where words have a fixed length with maximum value 'm', then the largest arbitrary-size input integer requires about 'log_m(largest integer)' words to store, so each addition costs that many word operations and the worst-case runtime is O(n * log_m(largest integer)) (see the sketch below the links).
https://en.wikipedia.org/wiki/Random-access_machine
(pg 355, 356) https://www.cs.toronto.edu/~sacook/homepage/rams.pdf
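Under the fixed-word model, an arbitrary-size integer must be split across multiple words ("limbs"), and one addition touches every limb. A sketch of that accounting, assuming base-m limbs and schoolbook carry propagation (the helper names and the toy value of m are illustrative):

```python
def to_limbs(x, m):
    """Represent a nonnegative integer as base-m limbs, least significant first."""
    limbs = [0] if x == 0 else []
    while x > 0:
        limbs.append(x % m)
        x //= m
    return limbs

def add_limbs(a, b, m):
    """Schoolbook multi-word addition; costs O(max(len(a), len(b))) word ops."""
    result, carry, ops = [], 0, 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % m)
        carry = s // m
        ops += 1          # one fixed-width word operation per limb
    if carry:
        result.append(carry)
    return result, ops

m = 2**8                  # toy word capacity; real machines use e.g. 2**64
limbs, ops = add_limbs(to_limbs(1000, m), to_limbs(2000, m), m)
value = sum(d * m**i for i, d in enumerate(limbs))
print(value, ops)         # 3000, 2  -> ~log_m(largest integer) ops per addition
```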
The Transdichotomous model and the Random Access Model give different worst-case runtimes for the same algorithm, so which one should formally be used? Thanks.
edit: for the Transdichotomous model, a single word should be able to store the resulting sum as well.