I'm guessing LinkedIn uses at least a few major strategies to prevent scrapers from enriching competitors, since any information scraped can be used to compete with LinkedIn Premium.

Bot Tests: these are captchas that can only be broken via computer vision. Captcha tests can be beaten by bots fairly easily these days; they mainly exist to stop weak bots, or to make scraping too computationally expensive to justify for capable ones.

Header Checks: every request carries headers that may include the browser's name, the system's core count, the OS, some cookies, a session ID, etc. Obviously, if your bot sends a header that says 'User-Agent': 'okhttp/3.5.0', LinkedIn is going to be on high alert. The default headers for whatever HTTP library you are using proudly tell LinkedIn that you are not using a browser and thus aren't a user; that's probably why your requests are failing in the first place. The fix is to lie: you can send a request from the command line but claim it's coming from Chrome on a 96-core Linux machine.

Behavioral Checks: if a user is not moving their mouse, LinkedIn will know. LinkedIn will also know if a user is navigating inhumanly fast or at an inhumanly consistent pace.
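To make the header idea concrete, here is a minimal sketch using Python's standard library. The User-Agent string (Chrome on 64-bit Linux) and the target URL are illustrative placeholders, not values taken from any real scraper:

```python
import urllib.request

# A bare urllib request announces itself as 'Python-urllib/3.x',
# which is exactly the kind of default header that gives a bot away.
# These replacement headers mimic what a Chrome-on-Linux session
# would send; the exact version strings are illustrative.
CHROME_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.9",
}

def make_request(url: str) -> urllib.request.Request:
    # Build the request with spoofed headers; the caller decides
    # when (and whether) to actually open it.
    return urllib.request.Request(url, headers=CHROME_HEADERS)

req = make_request("https://example.com/")
```

This only swaps the static fingerprint; a site that checks cookies or session IDs as well will need those carried over from a real session.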
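The behavioral checks can be blunted by pacing actions the way a person would. A sketch of a randomized-delay helper; the base and jitter values are guesses for illustration, not tuned numbers:

```python
import random
import time

def human_pause(base: float = 2.0, jitter: float = 1.5) -> float:
    """Sleep for a randomized interval between page actions.

    A fixed sleep produces an inhumanly consistent pace; adding
    uniform jitter makes successive actions neither too fast nor
    too evenly spaced. Returns the delay actually used.
    """
    delay = base + random.uniform(0.0, jitter)
    time.sleep(delay)
    return delay
```

Calling `human_pause()` between page loads varies the gap between roughly 2 and 3.5 seconds; real traffic analysis may look at more than timing, so this addresses only the pacing signal.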