A standard was developed because of all the different sensitivity specifications.
The issue largely goes back to the two major competing high-fidelity power amplifier designs of the '50s: the original Williamson amplifier required about 2.0 VAC RMS to drive it to full power output, while the Mullard design required only about 0.5 VAC RMS, and often less. Different manufacturers championed different basic designs: Heath patterned their equipment after the Williamson circuit, while Eico went after the Mullard design, as but two examples. Other manufacturers married these two circuits and sensitivity levels as they thought most appropriate.
With sensitivity levels from all the various designs all over the map, most preamps (but not all) were designed to handle the worst case: the Williamson amplifier. That's why most of the better preamps of the day provided about 20 dB of line stage gain (a voltage gain of 10X), so that they would work for owners of Williamson amplifiers. But that much gain was waaaay too much for (say) an Eico power amplifier, so Eico (as but one example) included level controls on their power amps to match the preamp's output to the power amp's sensitivity. On the other end of the scale, Leak preamps at best had unity gain (or less), requiring all the sensitivity the Eico power amps had. Other manufacturers like Dynaco tried to split things down the middle, choosing a power amp sensitivity that would generally work well with most preamps (except the Leak) without the need for level controls, and a preamp that then worked well with most power amps, whether they had level controls or not.
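The dB-to-ratio arithmetic above is easy to check. A small sketch (the 2.0 VAC Williamson figure and the 20 dB line stage figure come from the discussion above; the 0.2 V source level is just an illustrative assumption):

```python
import math

def db_to_ratio(db):
    # Voltage gain: ratio = 10^(dB / 20)
    return 10 ** (db / 20)

def ratio_to_db(ratio):
    # Inverse: dB = 20 * log10(ratio)
    return 20 * math.log10(ratio)

# 20 dB of line stage gain is a voltage gain of 10X
print(db_to_ratio(20))   # 10.0

# With 10X gain, a hypothetical 0.2 V source can just drive
# a Williamson amp needing 2.0 V to full output: 0.2 * 10 = 2.0
print(0.2 * db_to_ratio(20))   # 2.0
```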
The end game (at the time) was really an effort to have the volume control operate within a useful setting range, so that:
1. On the one hand, it wasn't too touchy to operate (or operating down in the weeds), and
2. For most sources, the loudness control function could work properly at lower volume settings.
Other issues such as speaker sensitivity and power amplifier output capability play into this as well. For a given power level, 80 dB speakers will produce a soft sound level, where 100 dB speakers with the same amount of power applied will in fact be loud. And a 75 watt amplifier with a 0.5 volt sensitivity inherently has a heck of a lot more gain built into it than a 10 watt amplifier with the same sensitivity level (about 2.75 times the voltage gain). So the issue was all over the place, with a standard being developed as a result.
For me, I dislike power amplifier level controls due to the frequency response errors they most often introduce, and so prefer a sliding sensitivity scale: lower powered amplifiers of about 20 watts capability get a sensitivity requiring about 1.3 VAC RMS to develop full power output, while bigger amplifiers of 70 watts or so require 2.25 VAC RMS to develop full power output. Such an approach has both of these power amplifiers producing about the same sound level for a given volume control setting, all else being equal in a given setup, but lets the high power amp do its thing beyond the point where the lower powered amp runs out of gas. And a sensitivity level of over 1 volt also helps to minimize interconnect noise and related issues as well.
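The sliding-sensitivity numbers can be sanity-checked the same way: if both amps end up with roughly the same voltage gain, a given volume setting gives roughly the same loudness on either, until the small amp clips. A sketch (again assuming an 8 ohm load):

```python
import math

def voltage_gain(watts, sensitivity_vrms, load=8):
    # gain = full-power output voltage sqrt(P * R) / input sensitivity
    return math.sqrt(watts * load) / sensitivity_vrms

small = voltage_gain(20, 1.3)    # ~9.7X for the 20 W amp at 1.3 V
big = voltage_gain(70, 2.25)     # ~10.5X for the 70 W amp at 2.25 V

# The two gains land within about 8% of each other (well under 1 dB),
# so the same volume setting yields nearly the same sound level on both.
print(small, big, 20 * math.log10(big / small))
```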
My 2¢ worth.
Dave