The underlying assumption is that the architecture provides instructions that operate as numtype x numtype -> numtype, with int being the smallest such numtype. To execute a + b, the operands must be mapped onto one of these instructions. For example, char + short would get mapped onto the int x int -> int instruction. The usual arithmetic conversions formalize the algorithm for doing this.
If you ignore signedness, the rule is extremely simple and natural: numtype = max(type(a), type(b), int), where char < short < int < long < long long < float < double < long double.
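A minimal sketch (C11) of that rule in action, ignoring signedness. The type_name macro is a hypothetical helper built on _Generic, not a standard facility; it just reports the type the compiler assigned to each expression:

```c
#include <stdio.h>

/* Hypothetical helper: map an expression's type to a readable name. */
#define type_name(x) _Generic((x), \
    int: "int",                    \
    long: "long",                  \
    long long: "long long",        \
    float: "float",                \
    double: "double",              \
    long double: "long double",    \
    default: "other")

int main(void) {
    char c = 1;
    short s = 2;
    long l = 3;
    double d = 4.0;

    puts(type_name(c + s));  /* int: both operands promote to at least int */
    puts(type_name(s + l));  /* long: max(short, long, int) == long        */
    puts(type_name(l + d));  /* double: max(long, double, int) == double   */
}
```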
Signedness throws a wrench into the works though.