Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there adverse effects if you configure it to a very large number (say, 1M to 100M)?
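For context, here is roughly how I'm inspecting and raising the per-process limit (a minimal sketch using Python's `resource` module; the values are just illustrative):

```python
import resource

# Current soft/hard limits for open file descriptors in this process
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raise the soft limit up to the hard limit; raising the hard limit
# itself requires CAP_SYS_RESOURCE (typically root)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```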
I’m thinking of server usage here, not embedded systems. Programs that actually use huge numbers of open files can of course eat memory and be slow, but I’m interested in adverse effects when the limit is configured much larger than necessary (e.g. memory consumed just by having the limit configured).
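To make "much larger than necessary" concrete, this is how I'm comparing actual system-wide file handle usage against the configured ceiling (a sketch reading `/proc/sys/fs/file-nr`, which on Linux reports allocated handles, unused-but-allocated handles, and the `fs.file-max` ceiling):

```python
# Compare system-wide file handle usage to the configured maximum.
with open("/proc/sys/fs/file-nr") as f:
    allocated, unused, maximum = map(int, f.read().split())

print(f"allocated={allocated} unused={unused} fs.file-max={maximum}")
```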