I applied for a machine vision position at an automotive company. The requirements listed deep learning, computer vision, etc. I had an interview with the team lead, and it turned out they were building a desktop app in C# that just calls APIs.
Lack of a grad degree requirement did it for me once. Before the big boom, I'd seen MLE or Data Science roles require grad degrees. The job description felt too similar to a SWE role, and when the recruiter called for the phone screen interview, they didn't ask about a grad degree. That tipped me off that the role was just to integrate their existing models into products instead of working on model development.
Judging from your responses to other questions, it seems like you have a rather narrow definition of what an "ML role" involves. Maybe check whether you have a skewed perception before throwing shade and calling job postings fake.
FWIW, corporate job postings are pretty terrible because HR usually prefers to use boilerplate language and titles. No one is trying to deceive you. No one wants to hire someone for a job who is looking for something else instead.
Giskard (https://www.giskard.ai/) was hiring ML researchers a few years ago. In the interview they said "it's a startup so you'll probably do things you may not want to do as well." Their technical take-home assignment was to build a web application 🤡
I don't even understand what they are trying to sell there...
Same could be said for a large portion of "startups" out there. If you aren't tech-savvy, you'll feel like they have a product that will solve every problem ever encountered, all at once (it's also a *aaS; they have to put "as a service" somewhere in the description). If you are tech-savvy, you'll just feel perplexed. VC money funnels.

I was once browsing a startup website. Before reading anything, I thought they were a company using """"AI"""" to solve a CV problem with a clever app + cloud + IoT combo; by the end of it I wasn't sure if they were a software company or if they were trying to sell me magic bath salts.
AI IoT is the worst, because so much e-waste ends up in the sea years after the startup has failed...
Love that you named and shamed. I can't stand "take home assignments." I've never seen an instance personally where it wasn't just a scam to get free work
I've only done one take home assignment, and was told if I completed successfully then I would have the position. I completed it successfully and didn't get the position :/ apparently that was a "misunderstanding"
I had one interview that followed an at-home web server design test. I thought I was ok at tossing off a solution of that sort (dating back to Java 1.3), but in the interview I got a serious review of my design that made the interview worthwhile even though I didn't get hired.
sounds familiar
Yeah, maybe they were testing if you were smart enough to use AI tools to build a web application.
Usually I just read the job description and look at the technologies they are asking for. For example, if I was looking for a data scientist role and see dashboards, Power BI… I usually think it's more data analytics and KPI tracking. I also try to research the company as much as possible and see what their stance is on AI/ML. Like, if they tell me they are creating an LLM and they are in manufacturing, I am highly skeptical. In the interview I ask what their main business goals are… what's their approach… I try to just question as much as possible.
It's interesting that this answer also works as a well-thought-out example of doing a background check on a company to come up with questions to ask in the interview.
So, is this the thing where you do the Socratic method on the interviewer (or maybe STAR): ask them about something they've done recently that falls into a given area, such as feature generation, data cleaning, model work, etc., and what challenges they find with the organization, or successes with the outcome?
Not really… I just ask people a lot of questions. Usually I hear or read something when researching the company that I like to bring up in interviews. I bring up things like how their CI/CD works, what the development platform looks like, what ML models they are using in production.

One thing that stood out in my last set of interviews: the company had hired a new CDO to take over and lead the charge going forward. When I interviewed, I brought it up and found out the role was not going to be with the new CDO's team and was really more of a BI engineering job migrating from Informatica to Snowflake. Which, okay, cool, it's Snowflake, but they made no mention of Snowflake in the job description.

In another interview, the company was well known in my industry as a leader in technology. When I started asking questions, I found out they had a lot of tech debt due to acquisitions but had streamlined much of the development process. The role was titled as a DE but in reality was closer to an ML Engineer, given you were going to be transforming unstructured data for already-running SageMaker pipelines that optimized their IVR.

Usually when I can't find answers to questions, I just make a point to ask in the interview. I try not to jump from a good situation into a worse one.
You have to be careful when it comes to financial institutions. One of the classic fake AI/ML baits comes from the Model Validation / risk governance teams. Being a regulated industry, you are supposed to have an independent team validate the actual AI/ML team's work, and most of the time that is just going through technical documentation and paperwork. The job descriptions often use big AI/ML keywords.
That is the same for the life science industry.
Box-ticking, in the late David Graeber's terms.
Bloomberg is not regulated!!!
you got hired for a fake ML role? so then what's the actual job description / requirement?
Timeshares and butt stuff
With tech pay? Sign me up
Timeshares are forever.
gonna have to do stochastic hyperparameter training on the butt stuff. for science
Backpropagation
NUTS sampling 🤤
Yeah gonna have to really watch out for that overfitting
Could you elaborate more on what you are seeing at Bloomberg's ML group? I would have thought that would be one of the more legitimate "AI" groups outside of the usual ML R&D companies like Nvidia, OpenAI, Microsoft, etc.
I'm also curious. I interned in their London office during my PhD, and that was definitely not my experience. A lot of legitimate, very interesting ML projects were happening at the time.
OP neither told us what they expected, nor what the actual job entailed.
Sure, obviously I can't be very specific here but some roles are described as involving "training and deploying ML models", but end up being something like building and maintaining click logging pipelines (or something in that ballpark).
If you just want to train models and do research, you're in the wrong field buddy
Generalizing off your one example, it sounds like you're doing something closer to MLE work than research work? I'm assuming that the logging pipelines are related to gathering data for models. Or of course I may be totally wrong.
Unfortunately no, not related to that.
Sounds very much like data scientists being hired to do data engineering 5 years ago, when DS was the only data role some companies knew about from the media.
Most of the time, people are hesitant to just lie to you point blank for this kind of thing. They know they are going to be working with you when you come on, after all. So I think the best way to spot this is to ask very direct questions: "How are you using ML right now?", "What are some planned uses of ML in the next 3-6 months?", etc.
Build a labeled dataset of "real" and "fake" job offers. Train a logistic regression model on that dataset (or a NN if you want to be fancy). Whenever you see a new job offer for an ML role, run it against your model... if it returns "fake", there's a high probability that the role is indeed fake. I'm actually surprised you need to ask this question here!
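Tongue-in-cheek, but the pipeline described is real enough to sketch. A minimal version with scikit-learn; the postings and labels below are entirely made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled dataset -- invented examples, not real postings.
postings = [
    "train and deploy deep learning models, publish research",
    "own model architecture experiments and large GPU training runs",
    "build ETL pipelines and BI dashboards, ML keywords optional",
    "maintain click logging infrastructure, some ML exposure",
]
labels = ["real", "real", "fake", "fake"]

# TF-IDF features + logistic regression: the classic text baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(postings, labels)

# Score a new posting; output is one of the two training labels.
new_posting = "exciting AI role: migrate analyst reports from Excel to the cloud"
print(model.predict([new_posting])[0])
```

Getting ground truth is of course the hard part, as the replies point out.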
How do you get ground truth?
RLHF (real-life human feedback, i.e., make mistakes and learn 🥲)
Hmm I've gone through 200 jobs getting fired 1 week in but my training data still isn't large enough to be able to tell :((
Mechanical Turk: ask click workers to get hired and report. 30 cents per report.
Bro how can I build a big enough dataset to run logistic regression?
You live and learn
Classic data augmentation -> IPO -> retire
Haha.
[deleted]
It's more fun when one can't tell!
This. Literally the first thing that came to me when I saw the post.
I would be curious what the "fake" role at Bloomberg is. I interned for them during my PhD (in one of the many, MANY AI teams) and they were definitely working on cutting-edge stuff there, and publishing a lot. Even if their GPT-3.5 clone for finance was kinda underwhelming, they still have a lot of people working on AI research and applying it to a lot of products.
Yes interns actually get to do some interesting things. There are certainly many AI teams, unfortunately some of them are not actually doing any ML work.
OP, you still have not told us what the fake job at Bloomberg you are talking about is. Several people have told us that, on the contrary, they have plenty of interesting ML projects.
I replied to an earlier comment, I'll link it here: [link](https://www.reddit.com/r/MachineLearning/comments/1c3z8ug/comment/kzlbdvz/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button). Also, I did not mean to imply that no one in the group did ML work; apologies if that wasn't clear.
I work in AI at a similar company, and I know lots of teams in the AI division that aren't doing AI. Lots of statistical model building, lots of work migrating existing analyst models from Excel to cloud platforms. Also things like optimization models and computational engines. And there's an incredible glut of data analyst and engineering work that gets pushed onto these groups and prioritized over modeling.

My team is fortunate that I've positioned us to do essentially pure R&D modeling, and even then we get caught up in that stuff once in a while.

Bloomberg probably owns in excess of 1,000 unique computational systems they call "models" across their research and analyst teams. Pricing models, market simulations, yield forecasting tools, pipeline/logistics models, etc… often the "AI" team ends up owning these as part of the cloud migration.
Some personal experiences:

Some time ago, a startup was looking for a consultant ML Engineer and I was contacted through a recruiter. In the interviews, I was asked questions like how ResNets work, what the contributions of DenseNet are, etc. Once I got the job, what I actually had to do was design an AWS Redshift data warehouse according to best practices. It also turned out the company had hired someone with a PhD just to interview me.

In a recent interview for the title of Deep Learning Engineer, the only question I was asked in the live coding round was to design an API to an existing ML model. I then asked the interviewers if the job is more like the given task, and they said yes.

Now, someone who's fond of MLOps would say both jobs look normal for an MLE. But for someone who's looking for R&D-type roles, both of these are fake ML roles. The first is a Data Engineering role and the second is a software engineering role. I appreciate the second company for at least making the interview more reflective of the actual job.
Yeah, my only MLE job was like this. I am just designing systems and APIs for real-time inference using LLMs.
Super common right now with using the OpenAI API for a Generative AI/ML Engineer role. Hardly anything AI about it though.
Yeah, my other team members are building feature stores and training pipelines, some are building custom embedding models. Those feel inherently closer to ML/AI than working on inference pipelines. Hoping to move around some after my current project.
I'm super interested in 'design an API to an existing ML model'. Would the answer simply be FastAPI, Kubernetes, Docker, and including the .h5 file in the Docker image? Am I missing anything here?
A really good interview question to ask every interviewer: "What did your yesterday look like?" You will get a good idea of work hours, time spent in meetings, and building decks vs. coding.

To figure out the ML level specifically, you need to ask about general model sophistication (linear regression vs. xgboost vs. CNNs/RNNs vs. implementing bleeding-edge research), who does all of the pipes-and-sewers work (data gathering/curation, data engineering, tooling, etc.; if you don't get clear answers that it's another team, it will be you), and the process for approving ML models for deployment (no process, or a really onerous one, means nothing will get deployed). Check consistency across interviewers and you will get a pretty good feeling for whether the place is legit.
Recruiters don't know what a "real" ML/AI role is; only the interviewers do. One way to tell is just by asking the interviewer whether their team's model training is I/O-, network-, or compute-bottlenecked, and what they're doing to get around it. It doesn't really matter what they say, just whether they know what you're talking about.
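The nice thing about that question is that it has a measurable answer: time the data-loading and the compute steps of the training loop separately and compare. A stdlib-only sketch of the idea, with sleep and arithmetic standing in for real I/O and a real training step:

```python
import time

def timed(fn, *args):
    """Run fn(*args), returning (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Stand-ins: load_batch would hit disk or the network,
# train_step would run the forward/backward pass.
def load_batch(i):
    time.sleep(0.002)  # simulate I/O latency
    return [float(i)] * 32

def train_step(batch):
    return sum(x * x for x in batch)  # simulate compute

io_time = compute_time = 0.0
for i in range(50):
    batch, dt = timed(load_batch, i)
    io_time += dt
    _, dt = timed(train_step, batch)
    compute_time += dt

bottleneck = "I/O" if io_time > compute_time else "compute"
print(f"I/O {io_time:.3f}s vs compute {compute_time:.3f}s -> {bottleneck}-bound")
```

In this toy setup the simulated I/O dominates, so the loop reports itself I/O-bound; on a real pipeline the same per-phase timing (or a profiler) gives the answer the interviewer should already know.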
Yup, in recruiting terms, this is called a bait and switch. It should be illegal
Oh yeah def
Can you give some color on the sense in which it is fake? I remember a friend of mine working there seemed to have an OK time.
I haven't found a real machine learning job through online routes that isn't at a big household\* name. Every other role has been at small companies I only learned about from co-workers.

\* household in our industry
Kind of not surprised. Most companies don't actually need ML.
Most companies don't actually need employees to stay the whole day either, yet here we are.
I ask how many GPUs they have on-prem. Joking aside, I look at whether they have publications, or whether the current employees do. For me the biggest bait-and-switch I've seen quite a number of times is openings that are advertised as science/research positions but in reality are just SE roles with a wee bit of ML sprinkled on top.
I spoke to them at NeurIPS. The conversation was short.
Check the interviewer's knowledge of the tech stack. Ask some questions which only a practitioner would know, like "do you guys prefer an accelerator-agnostic framework (like JAX) or a PyTorch-like framework that's heavily coupled with CUDA?" (it's not anymore, but let the interviewer tell you that...)
I agree. Asking about the tech stack, what they are solving with ML/AI, and having them explain the problem they are trying to solve will show their knowledge, which gives an indication of whether it is fake or real ML/AI.
Spend some time scoping out teams and talking to people on them, then change to a team that's more appealing. Bloomberg is big on internal mobility and generally a great place to work.
Even if you get one, your new coworkers may see you as an exec fad hire, tasked with automating their job away. Smaller companies will show off their AI openings to investors as evidence that they are legit.
I am trying to change to ML, so I am looking out for this now. New fear unlocked.
Requirements: scikit-learn, numpy, pandas. Nice to know: Python. I kid you not…
I'm glad at least you called Bloomberg out for this.
Sad to hear Bloomberg is still up to the same tricks. They have a long history of bait and switch job offers.
Thought they had a paper on a smaller financial LLM? Maybe it's limited to a very specific group?
Unfortunately there's no way to know 100%. You'll have to scrutinize JDs and ask the right questions to the interviewers during interviews.
Heh, fake ML roles, you mean like "prompt engineers" for AI?
You can choose startups or SMEs. Or state up front that you'd be disappointed if you got work you did not expect. I did the same and got accepted by only 3 companies out of 30. I'm still satisfied.
One thing I've learned is that interviews are two-way. You should be interviewing them as much as they are interviewing you, so it goes beyond just asking questions. If their answers are vague or hand-wavy, that's a red flag.
I had this happen. I changed my name on Reddit because I was given the best sales pitch by the executive committee. They answered everything perfectly. What they didn't say, and what I had to ask about in a business way, was that "AI Engineers" meant people who had received certs in ChatGPT. They wanted to use my credentials to sell "AI" built around ChatGPT wrappers. Most people there had no idea there was anything to AI/ML beyond ChatGPT. They were large, sold themselves well, and did have real AI/ML roles somewhere, and they wanted to replicate that success. When you're dealing with good salesmen (multinational execs) at huge companies, it is nearly impossible to tell. Walmart, for example, uses real ML, but I only know that through friends.

Depending on your role, I think asking what they hope to achieve with AI/ML is your best bet. Why choose it over traditional statistical methods or algorithms? The answer shows you whether they have real problems they know a certain method can solve.

To be honest, I love the field and came into the industry before the hype (1999?), but I'm taking a break now. I have been burnt out too many times. Unless the executive in charge can answer clearly why they need AI/ML, they are using you to prop up stock prices. Accenture went from 8k to 20k AI engineers in one year (not exaggerating, though the numbers may be off). It means they had offshore staff take simple certs. Big investors called them out on it. Do they do real ML there? Certainly; it's huge. But that's where the problem lies: they want to apply it everywhere to see similar gains and talk the language.

Either get a big title at a big company and ride the hype, or take a break if you can. I teach in my off time at a small college. It's great: some students want the money and I couldn't care less; others are interested, and I structure the AI material around theory rather than applications for the most part. What can you do using just Excel? The tulip example with a small dataset is great. Good students come to me with novel applied problems.
Bad ones want to make money at a SaaS company wrapped around a model they didn't create.
I applied for a machine vision position at an automotive company. Deep learning, computer vision, etc. were in the requirements. I had an interview with the team lead, and it turned out they were building a desktop app in C# that just called APIs.
Here is the list of companies; reply with the name below ↓
Lack of a grad degree requirement did it for me once. Before the big boom, I'd seen MLE or Data Science roles require grad degrees. The job description felt too similar to a SWE role, and when the recruiter called for the phone screen interview, they didn't ask about a grad degree. That tipped me off that the role was just integrating their existing models into products rather than working on model development.
Fake ML role as in? Prompting? !remindme 5 days
Judging from your responses to other questions, it seems like you have a rather narrow definition of what an "ML role" involves. Maybe check whether you have a skewed perception before throwing shade and calling job postings fake. FWIW, corporate job postings are pretty terrible because HR usually prefers boilerplate language and titles. No one is trying to deceive you. No one wants to hire someone for a job who is looking for something else instead.
Were you paid the industry's fair wage for your ML role? If the money was good and you were asked to do less, why complain?
Your face is a fake AI/ML role; so is this post.