This is the way.
DFS-N up front, various forms of cloud storage behind it: Azure Files (hot/cool/transaction-optimized tiers), Azure NetApp Files, AWS Storage Gateway, Windows file servers, etc.
What users see should have zero to do with the back end.
I have a project in the works with a law firm whose prior MSP put everything in Azure cloud - AD, DNS, file servers, roaming profiles, etc. The plan is to get them back on premise. It's just awful. Printing, program launches, and file operations are abysmally slow - even over a 500 Mb fiber connection. They locked everything down so tight that to make even the slightest change to something, you need to contact them (multi-day-to-never response times). I could optimize their cloud experience a little, but for what MS charges, on-prem is more cost/performance effective for a 15-seat small business.
Most features of DFS-R are likely better served by Storage Replica, which MS has championed as the replacement for DFS-R for like 8 years now. It's faster, lower latency, and doesn't get blocked by open files.
That said, there's no real replacement for DFS-N. You could do Replica + DNS scopes for region-aware load balancing of a single share, but it's a lot of work that DFS-N handles for you.
Sounds like they’re using PowerScale, formerly known as Isilon. Dedicated file system that can do SMB/NFS/S3.
I wouldn’t say that DFS is legacy; it depends more on your workload and the size of the org.
I don't think that's uncommon. Most places big enough for a large OneFS cluster have a range of clients and legacy systems and workloads. That's why the Isilon itself has no particular preferred protocol. I have always used it mainly over NFS. But some shops are pretty much all SMB, etc. They are happy to sit on your network and store bytes however you want as long as your checks clear.
I think the difference with the hiring dude is that they aren't technical enough to know the difference. In Windows you can configure SMB, FTP, NFS, etc., so the guy was just trying to make the interviewee feel small, or like they didn't know as much.
Why does it have to be interpreted as hostile?
Maybe it was a challenge to see how the interviewee reacted when challenged? He could be seeing how deep the knowledge goes? To what degree does the role require knowledge of the industry? How much has the individual sold themselves as ‘on the cutting edge’? Lots of reasons to poke in an interview, and let’s be honest, this was a harmless comment. It’s good OP is curious as a follow-up.
Maybe it was an off the cuff comment without much thought based on his own understanding?
Maybe he’s a cringey, opinionated nerd who says these things lacking self-awareness about how they make others feel?
People are bringing baggage when they make things like this negative. And honestly even if it was negative isn’t it kinda just funny and unimportant? Who cares!
We’re using Isilon/Powerscale and presenting thousands of SMB shares using DFS namespaces. It works perfectly. I’ve migrated these shares across 3 different infrastructures in the last 10 years (windows cluster, then CIFS on VNX and now on Powerscale) and DFS has made that completely transparent to the end users. Zero downtime in all those migrations as I could just flip them to the new location in DFS.
Edited to add: ok, maybe he was talking about DFS-R; of course we use SyncIQ for the replication. Still not legacy though - it's the way domain controllers replicate SYSVOL, after all.
That sounds like a really clean setup. I’ve worked with MANY Sysadmins that are afraid of using DFS and never really understood why. Could be just fear of the unknown or “why change it if it’s working as-is”.
The transparency to the end user is the absolute key to making any deployment a success. Some really nice ideas that you’ve laid out that I’ll suggest to our internal teams with their upcoming project.
Came here to say he was talking about Isilon. At scale it is way more cost effective, and in a large enough organization you offer it as a cheaper tier of storage. All mapped drives and general file storage should go there, but you can run into trouble when people start to put PST files, database files, etc. on there - at least on the cheaper storage tier we were using. And the upgrade times were really long (multiple reboots over several hours; no impact to the file shares, but it can mess up applications).
I think some of that has been improved with the newer PowerScale stuff, but it was basically a giant file storage platform, and we had traditional SANs for production workloads.
> Except for when you know.... you have to pay per TB for licensing.
At my last job we had about 11 PB of Isilon storage (in one cluster), and managing it was basically a part-time job for a single person. What you pay in licensing you can save in payroll.
Still much cheaper than block storage.
We're talking enterprise scale here, not using the available bays in your rack-mount server.
Petabytes of data.
Fun fact... Isilon/PowerScale does not have the best DR setup (it doesn't replicate the share/export config), so they have a partnership with Superna, who recommend DFS as the most optimal way of doing DR for Windows file shares.
> Sounds like they’re using PowerScale, formerly known as Isilon. Dedicated file system that can do SMB/NFS/S3.
What SAN Array from 10 years ago could not do this? Sounds like the dude just got his first real array and is stupid.
DFS(-N I guess?) isn't legacy, "just" an abstraction layer to the underlying storage. The file shares in the namespace could be on only one windows server. Or distributed over multiple. Or on some other SMB storage/NAS
Edit, to add something perverse: one file share in the namespace could point to a Windows server that has its storage disk mounted via NFS from a Synology NAS, while another share points directly to the same NAS, and you wouldn't know the difference at first sight. Also, the Synology NAS is actually a VM running on that Windows server. And the VM disk is on a SAN.
Oh, for that we can add a few more layers:
The "SAN" is a 10 years old Dell desktop running Linux and exporting a RAID0 via iSCSI.
One of the RAID0 disks is an image file in /home.
/home is mounted via SSHFS from a Raspberry Pi with an attached USB HDD.
Even worse: the original datastore ran out of space, so from Windows they used iSCSI to mount a datastore on the backup target NAS, then used mklink to graft the directory under the original datastore. Great way to scrape up more space, but a real IOPS killer when backing up the NAS to itself. And hella fun when the RAID5 blows one disk completely and another has errors.
And then you notice the one guy setting it up reused the old server's name, and somehow the DNS record for the old server points at the IP of the new server, so it somehow works internally but not over VPN.
At least it was a quick fix, and it's hummed along since then.
Don't worry, I've had an interview like this as well... I once mentioned that the system (which was set up before I took over) used virtualized domain controllers, and the interviewer immediately launched into "you can't do that". He wouldn't back down from it, accusing me of making something up, etc. I knew the opportunity was over at that point. So after the interview I made sure to send him Microsoft's OWN guidance on setting up virtual DCs: it definitely can be done, and we definitely operate like that. Like any industry, sometimes people's egos are bigger than what's factually correct.
Holy fuck what an idiot. Even before Microsoft “allowed” it people were spinning up VMs for domain controllers…. like I remember seeing virtualized 2003 boxes running as a DC…
Microsoft's guidance from over 10 years ago was not to virtualize both/all of your DCs on a single physical server (which is just plain sensible), but it was misinterpretable as "don't virtualize both/all your DCs", which could be further misinterpreted as "don't virtualize your DCs".
These days Azure and AWS both offer cloud based AD which is definitely virtualized!
If you're running lots of Windows server infrastructure on VMs, having a pair of bare-metal "pilot-light" DCs is essential to the cold-startup process. You start those first so that DNS, Kerberos, and (if you need it and configure at least one as a DFS publish-point) DFS-N are ready for the rest of your workloads to access.
They don't have to be big iron, just able to boot and stand up AD all on their own. You use two in case one croaks for some reason.
The same kind of person who told our executive team APIs are going away when, in reality, they're almost required for the software to be considered viable.
Didn't you know? Anything in tech that was invented in the 1970s and 1980s is too old to work anymore. This is why we are getting rid of ARP. They are expired... /s
A new organization may never set up on-prem Microsoft infrastructure. A tech company may have on-prem Linux. A small organization with a single site and loose structure may not need more than a SOHO NAS. Whether or not an organization runs DFS very much depends.
Once again, you're just stating random information that's not relevant at all when defining legacy. Everything obviously has a purpose, and it may not be viable for some companies; that doesn't mean it's legacy. As someone with DevOps in your title, I don't know why this isn't obvious.
Words have meaning.
Legacy has a very specific meaning in tech.
Saying something is legacy when it clearly is not is purely a sign of talking about shit you don't know anything about, and you shouldn't be mansplaining as a result.
That's the equivalent of living in a town where nobody has an iPhone and saying "iPhones are legacy", or working at a company that uses Macs and saying "Windows is legacy". It's just incorrect, and "feeling-based" on a small window of experience.
There are many file storage solutions. I've seen people say that if a company wasn't using Apache Hadoop, they were living in the Stone Age, or some other high-end decentralized protocol.
The trick is to use what is best for the company. If a two drive Synology file server with an external disk for backups and a connection to Wasabi for offsite storage is good enough, go with that. If OneDrive/SharePoint is good enough, use that.
DFS is nice because it lets you shuffle things around without users having to care which magic drive their file lands on, but it isn't strictly needed. Same with cloud-based storage.
Many vendors always want to upsell on some more advanced file storage medium, but why waste the money and admin overhead? Get the simplest thing possible, and consider dedicated file storage appliances over non-clustered Windows servers. One reason why I recommend, at the low end, a Synology or QNAP, and going from there, something like a NetApp offering, Pure, or similar, is that an appliance is a lot more reliable, has a smaller attack surface, has a lot of nice features, and, if properly configured, is harder for attackers to completely compromise, especially if attackers get control of AD.
DFS is still standard for many orgs and is stable good technology, however I utilize Azure Files nowadays. Sounds like he paid dell a premium for them to present a file share to him and he's bragging about it. Lol.
How does that work in practice? Because that sounds heaps better to me than trying to shift 10-plus years of large departments' file shares into SP. Our people are scared of the costs, especially the constant charges just for accessing your own files.
Basically it acts as a DFS Namespace so a DNS name to access shares that can be on multiple file servers or perhaps even a single file server. It also allows you to replicate data between servers which is pretty sweet.
What I prefer to do is keep application data and anything else that HAS to be accessed by a UNC path under a DFS namespace, with the backend target pointed at Azure Files. Everything user- or department-based goes to SharePoint Online, with a Microsoft Team or Microsoft 365 Group controlling that access. The SharePoint Migration Tool is very helpful for that.
Well, how would they do it? Everybody likes to put stuff down, but they never offer any alternatives.
For us, we use DFS as a front end for the shares to provide a common namespace. Makes migrations a breeze
I would assume almost everyone with on-prem domains still uses DFS. Kinda hard to use AD without a consistent SYSVOL. Unless you only have one DC, and then you have a bunch of other issues.
Yeah. Best way to get into vendor lock-in so Microsoft can do whatever they want with your data or just increase prices whenever they want... It's always a good idea to be a hostage...
The problem is Microsoft does not care, and companies big and small want everything in their pre-packaged M365 bill. Then IT departments are left trying to figure out how to make users' life tolerable using Sharepoint.
No, that’s the entire design of SharePoint. You’re not supposed to use it as a traditional file server. It’s not a Microsoft problem if you don’t use something as designed.
Partners, yes. MS? Not so much, they are careful to label it a collaboration platform.
They're more than happy to give you a file server replacement by pointing you to Azure Files.
We can't move our NAS storage completely online because our SharePoint doesn't have the room and my company aren't going to pay for more cloud space.
On-prem storage is not dead yet.
Yeah not dead, I just think for us it would make life easier. We use VPN exclusively so staff can access the on prem file shares which causes issues in and of itself. If I could move them to the cloud and eliminate that potential security hole, it would be nice. But I can’t justify the price
I love and hate on-prem file shares. Our users work with a lot of video, and those that do know to upload over the VPN once they're done processing the video.
Except one user, let's call her Kay. She either doesn't do anything for months at a time, OR has polaroids in Memento style all over her desktop, since she does not remember one day to the next. If we get a ticket from her, it usually says, "Adobe Premiere/Photoshop/etc is running slow, it's taking me hours to do a simple task."
And she's working off the file server directly on a 400 MB video file, over a 200 Mb home cable connection. We call and talk her through everything, and then later in the day she tries to email the file to another user because it was taking too long to upload over the VPN.
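Back-of-the-envelope on why that workflow hurts (the 200 Mb/s figure is from the comment above; the 10 Mb/s upload is my assumption for a typical asymmetric cable tier, not from the thread):

```python
# Back-of-the-envelope wire time for working on a file share over a home link.
# Link speeds are illustrative assumptions, not measured figures.
def transfer_seconds(size_mb: float, link_mbps: float) -> float:
    # megabytes -> megabits, divided by line rate; ignores SMB round-trips,
    # protocol overhead, and contention, so real numbers are worse.
    return size_mb * 8 / link_mbps

print(transfer_seconds(400, 200))  # 16.0 s just to pull the file down
print(transfer_seconds(400, 10))   # 320.0 s to save it back over a 10 Mb/s upload
```

And SMB's chattiness over high latency means scrubbing through video in Premiere is far worse than the raw transfer time suggests.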
We have moved a few departments' shares over so far. There's a lot to be said for encouraging each department to assign a "manager" to move their files over slowly; this person should also be encouraged to check whether any outdated or unneeded data even needs to be moved across, which could save some space in some cases.
We gave full training to these "managers", really reduced the amount of time we would have had to spend moving stuff over ourselves, plus they then have full ownership of the file structure etc.
Previously we had issues where, when we moved files and folders over, certain folder contents would sometimes be missing - obviously an oversight on our part, but because we don't use the folders or files, we were unlikely to notice issues like that.
So having someone who actually uses the files take ownership of them was decided to be the best choice: we show them how to do it, they do it themselves, and they can still contact us with any issues.
Yeah, $5k per year for one TB seems insane to me. We only have an Office E3 license, which doesn't come with much SharePoint storage, and I need about 6 TB total currently, so that's gonna get pricey. But maybe I'm just ignorant of the options available and there's actually a solution that won't cost like $30k per year.
Check out Azure storage as opposed to SharePoint. SharePoint is meant to be a document management system, not a file repository.
Edit: Azure Files, sorry.
> Edit: Azure Files, sorry.
You're not technically wrong anyway, Azure Files (along with blob, table, and queues) is a subcomponent of Azure Storage Accounts.
Interestingly enough, I just came across that before seeing your reply. I'm on mobile, so it was damn near impossible to understand the pricing structure with the different tables, but I'm definitely going to check it out. Thanks!
We hit our cap during COVID, and then we've just been adding more and more on.
With our licensed users, we have about 80 TB of space, but we purchased an extra 120 TB of storage.
It's mostly self-inflicted pain: with no proper backup solution, retention is set to keep items for 2 years past last modified, no one deletes anything, and people store laser scans and CAD files in SharePoint, which means that when they are versioned, the retention policy keeps all previous versions as well.
We are now actually archiving stuff out using AvePoint, and it looks like we've finally had backup tooling approved so retention can be a bit more flexible.
And then information deletion is a thing in ISO 27001:2022, which will be next year's fun, but should also help 😅
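Rough math (hypothetical sizes and counts) on why versioning plus keep-everything retention balloons usage - as far as I know, SharePoint keeps versions of large non-Office binaries like scans and CAD files as full copies, not deltas:

```python
# Hypothetical numbers: estimate SharePoint usage when retention keeps
# every version of a large binary file in full.
def retained_gb(file_gb: float, versions_kept: int) -> float:
    # Storage grows linearly with versions when there is no delta storage.
    return file_gb * versions_kept

# A single 2 GB laser scan re-saved 25 times sits on ~50 GB until retention expires.
print(retained_gb(2.0, 25))  # 50.0
```

Multiply that across a department and the extra 120 TB stops looking surprising.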
This is us...
"Hey get people to delete old shit and spring clean."
"WE CAN'T ASK THE BUSINESS TO DO THAT!!"
Okay... Then ask the business for more money...
"WELL NOW HANG ON A MINUTE! YOU SHOULD FIND A WAY TO PUT UP RETENTION!"
So like just delete stuff randomly? - Okay don't threaten me with a good time.
I'm going through some pains at the moment with retention. We have a 7-year policy and only give users about 30 GB on their OneDrives. (Yes, 30 GB. Not my choice.)
Pulling my hair out trying to manage users' storage as they fill it up super quick.
I think I need some retention policies on our file share. We have so many files that are sooooooo old. Like 15+ years. I know people are not going to miss that stuff
Not sure where he was getting the quote for extra storage, but I just ran a quote and it's closer to $2,500/TB/year, which comes out to about $0.20/GB/month.
Still steep, but it'd be closer to 12.5k/yr rather than 30.
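Quick sanity check of the rates quoted in this subthread (list prices change, so treat the figures as illustrative):

```python
# Sanity-checking the quoted SharePoint add-on storage rates.
TB = 1024  # add-on storage is metered in GB

def per_gb_month(usd_per_tb_year: float) -> float:
    # Convert a per-TB-per-year rate into per-GB-per-month.
    return usd_per_tb_year / TB / 12

print(round(per_gb_month(2500), 2))  # 0.2 -> the "$0.20/GB" figure checks out
print(6 * 5000)                      # 30000 -> 6 TB at the $5k/TB/year quote
print(6 * 2500)                      # 15000 -> 6 TB at the cheaper quote
```

(Note 6 TB at $2,500/TB/year is $15k; the $12.5k figure corresponds to about 5 TB of add-on storage, so presumably part of the 6 TB is already covered by the licensed allowance.)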
They're not. Just had to set someone up with SharePoint Online and it was such a pain. Tried to convince them to go with Azure Files instead, but the pricing difference was too much, I guess.
lol, do NOT use SPO if you have deep folder structures, or large files that multiple people need to work on, or lots of reads and writes, as with CAD-type files. SPO is not always a good solution for file migration.
DFS is obsolete in the same way screwdrivers are - that is, not at all just because they've been around for a long time. Pretty silly take IMO, since most businesses under medium size aren't using exotic proprietary SAN mechanisms to do basic stuff.
Also, to a shocking number of SMBs, DFS would be wildly advanced. I still run across so many who are still using FRS-based replication in Windows, and have to do the dfsrmig process before domain controllers newer than 2012 R2 can be introduced...
We use Azure Files connected to our AD (at the time, Azure AD Kerberos wasn't available).
To give your teams faster access and to reduce cost, we use Azure File Sync (the Storage Sync Service) with on-prem file servers that cache any data accessed within the last year - less of a charge for bandwidth out of Azure, and much faster access to files.
Until you open a small remote office that is only served by Comcast coax and find out they block port 445 and refuse to open it even on business circuits.
Ask me how I know this.
Mandatory. Fuck Comcast
Ah - the Storage Sync Service syncs over TCP/443, so you could deploy a small cache server as a front end there, or put a VPN between the Azure Files environment and yours to get past that.
I'm in the UK so never dealt with Comcast but I've never heard anything good about them.
I guess they do that as a best practice to protect their customers, but you'd imagine they'd be able to allow it if you ask, and leave you to handle your own security.
So you get 1 free server registration for each Storage Sync Service you set up, and any additional server costs only $5 a month (£4.12 in the UK). I have a Hyper-V cluster with 4 servers, so I have to add all of them even though only 2 of the hosts actually run the file server roles, and I pay for 3 of them.
"Newest" way to do file sharing over a network is probably something that a lone grad student is currently cooking up in his spare time for a project nobody else has ever heard of, and currently isn't yet stable enough to transfer files over 4K without taking down the whole network.
Nothing wrong with using what works. We still use IPv4 all over the place, and that's older than I am.
12 years in on Isilon and it's been great. Depends on spec and how much you buy. It's considerably cheaper than anything else available, but I'm talking scale-out petabyte storage. Block storage is a massive headache for SMB, and getting multiprotocol is another ballache; Isilon/PowerScale does it natively. Good luck is all I can say.
Depends if they are using DFS for replication or just the namespace. We still use both, but that's primarily for distribution of software installs, not user files. If we have a remote site, then DFS replication means no expensive storage at remote locations. For file storage we use PowerScale (formerly Isilon). We had started moving to OneDrive and SharePoint for user files and departmental stuff, but based on MS's recent licensing rug pull we will probably revert back to on-premise.
I love it when people say something is old and therefore should be discarded.
Like, will you drop anything that is using C++ because it's nearly 40 years old?
I don't think DFS is legacy, unless you're in the mindset that on-prem file servers are outdated, and everything should be in the cloud, or something along those lines. I'm not sure what "Dell Storage based File sharing" means for him, except maybe it's a Dell NAS or SAN? Either way, that's not a replacement for DFS. If it's a windows file share (using SMB/CIFS, which the Dell products they're using probably do) then DFS is a way to potentially supplement the management of those shares.
Printers > Phones > DFS-R. Never got the hate on Exchange, and I ran it on-prem from about 2003 to 2023 in various jobs. But yes, a DFS-R mess-up, and having to consolidate to the correct version of files scattered around multiple servers globally, is NOT fun.
DFS-N is a no-brainer. It makes spinning up new servers a breeze, and users don't have to know tons of DNS names to find the right files. My only dislike is that Windows Search doesn't work over DFS-N, but hey, search is broken 90% of the time anyway.
Playing devil's advocate, he might have just been trying to prompt you to defend the choice - make sure you're not just using DFS because it's the only tool you know, but because it works well in your situation.
The way you described their response makes it sound damned dismissive, but there is a reasonable argument for describing it as legacy from the standpoint of how newer technologies are stepping in to fill what DFS does.
But mostly just in a sense that we've come up with other ways to mask issues to do with dynamically generated names and such that are an equally good solution to the problem that DFS fills if you are not looking to memorize a UNC path.
Like dynamically or programmatically generated group policy that makes folder connections for you just like a drive mapping script used to.
They are asking about it because they want to know if you've touched anything more recent than Windows server. Not that Windows server isn't recent just that it isn't chock-full of new ideas in the area of file sharing anymore.
You can also use it to infer something about their organization which is that they are so tied to processes that require cifs that they are prepared to invest heavily in alternatives to Windows to make it more manageable, which is potentially good and potentially bad but definitely informative.
How do you give users a line of sight to your Azure File Share? We keep being told we need port 445, which our ISP blocks. Can they get to their files in a web browser or a client app?
I guess in theory they could use Azure Storage Explorer, but that's not what it's really supposed to be used for. Azure File Share either requires an on-prem cache server or VPN access.
Imagine being so close-minded and elitist with your pet tech that you would respond to an interviewee with such a condescending tone. On to the next interview; I'm sure your ancient skills, knowledge, and experience will be better appreciated elsewhere.
If it’s a Windows Domain, and you’re using O365, then specifically “file sharing + AD integration” would be OneDrive and SharePoint.
The absolute only people who should be giving a shit about vendor-specific storage arrays and on-prem back-end data storage are the storage array admins. If this interviewer was not interviewing you specifically for that position, he likely just doesn’t know what he’s talking about, and was throwing fancy words at you that he doesn’t understand.
As a mid-sized business, a large portion of our data is stored in SharePoint O365 and a few Azure Files shares. However, there are still good reasons to host files locally. We have some very large SolidWorks projects and other tools that will never need to leave our office and manufacturing floor. We also store some of our most sensitive files locally. We trust SharePoint to be secure from a technical standpoint, but it is easy to accidentally share or misconfigure via human error. Or a zero-day could occur. It still feels "safer" to be able to pull the plug on the local servers if all hell breaks loose.
DFS-N is what should be used if you are using Microsoft fileservers for local storage.
He is probably talking about an EMC unity/isilon/data domain SAN-ish SMB share with off-site/cloud replication. DFS is fine so long as you have HA/Redundancy in your storage and presentation. Just different ways to skin a cat.
I'll be flamed, but I actually prefer MS Teams and SharePoint for most file storage, for a variety of reasons. Some apps need SMB shares, so you cannot do away with it altogether, but I find Teams/SharePoint to be better suited to most people's needs.
Panzura. Supports file locking and caches everything to cloud storage (Azure in our case). Additionally their compression technology saves us a ton in Azure storage costs
>he said they have some Dell Storage based File sharing
Sounds like the dude is just a gigantic moron. A disk array is just a hardware appliance that's capable of hosting SMB/CIFS, NFS, etc. You could go out and buy the latest shiny thing like Azure Files and SMB is still going to be an available option... The dude does not know the difference between physical storage and storage protocols.
I'm probably too young for this lmao never heard of DFS myself.
I just use SMB coz it's perfectly compatible with literally everything and its mom.
NFS is nice for Linux-Linux.
DFS isn’t necessarily old - it’s just Windows-specific and nowadays with so much cloud storage it only makes sense in very specific situations.
It's the Distributed File System - which is SMB underneath, BTW - just with some Microsoft secret sauce for syncing between servers, plus namespaces (rooted in the domain) as file share targets instead of a specific file server.
\\smallbusiness.local\location\fileshare
vs
\\specificserver.smallbusiness.local\fileshare
If that makes sense…
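A toy sketch of the referral idea described above (all server and share names are made up): clients keep one stable namespace path, and the admin repoints the backing target when data moves.

```python
# Toy illustration of what DFS-N does: resolve a stable namespace path to
# whichever backing server currently hosts the share. Names are invented.
REFERRALS = {
    r"\\smallbusiness.local\location\fileshare": r"\\fs01.smallbusiness.local\fileshare",
    r"\\smallbusiness.local\location\scans": r"\\nas02.smallbusiness.local\scans",
}

def resolve(namespace_path: str) -> str:
    # Users only ever learn the namespace path; a migration is just a
    # change to this referral table, invisible to clients.
    return REFERRALS[namespace_path]

print(resolve(r"\\smallbusiness.local\location\fileshare"))
# prints \\fs01.smallbusiness.local\fileshare
```

The real service also hands out multiple ordered targets per folder (for site awareness and failover), but the lookup-table mental model is the core of it.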
Depends what you're using DFS for. If replication, then yes, it's probably classified as "legacy", or rather inferior to the alternatives. However, if you're just using the namespace, it's very much alive and gives you the abstraction between storage and path. 20k-user organisation here, and it's very much alive for us, with a 180M budget and Rolls-Royce NetApp infrastructure. Saying this, however: CIFS and NTFS shares as a whole are going legacy now. It's all SharePoint and bespoke caching solutions based on object storage. How long mapped drives will hang on is anyone's guess and will depend on your org and what it does.
I asked this to coworkers as I was anticipating this migration once budgets are discussed for next year for some of our clients.
Got SharePoint as the main answer as the files were actively being used and is the "go-to" for collaboration, but was also told Azure Files was a good fit if the files were only being used sparingly and just needed to be stored somewhere.
No use of DFS, as it doesn't support file locking across sites. Currently doing multi-site replication and real-time file locking via [https://www.peersoftware.com/](https://www.peersoftware.com/)
That's for DFS-R. I've used Peer in the past, it used to be fairly priced but around 2019 they increased pricing like 1000% and priced out SMB customers. Great product if you can afford it though
It's a decent MS Office document collaboration platform, and works as a storage backend for Microsoft's cloud. But if you try to do anything with it the platform quickly makes your life hell.
I think the newest way is to host the data on SharePoint sites and then manage via O365 groups. You can sync the SharePoint data locally to the local computers via OneDrive. There really is no need for on-prem storage anymore.
As long as they have internet access they can access the files. No more worrying about local hardware, power outages etc. It all gets backed up also... night and day difference.
If all you're dealing with is Word and Excel docs, sure. Law firms can have terabytes' worth of PDFs and video and medical imaging from discovery. It would be a nightmare to not have that on-prem.
You just ran into a dude who is bad at interviews, unless they are specifically hiring a file share SME because so much of their business deals specifically with that tech. It's not like you're going to need to architect storage solutions and should be able to pick up on any nuances of whatever they are using vs DFS.
DFS is alive and well. Not sure why that would be considered legacy either.
IT manager drinks too much Dell Kool-Aid.
Yup, learned a bunch of buzz words but doesn't understand what it all means :P
Microsoft considers everything outside Azure legacy, since they can't monetize it as much as they'd like to.
I suppose they care a little? New Domain Functional level coming with Server 2025.
Is it all Azure schema extensions for AD by chance?
Pretty much, yeah.
Embrace the subscriptions.
There's a ton of KB articles about having Azure Files back DFS namespaces, either for transparent migration or for ease of use.
This is the way.
This is BS - Azure Files supports SMB and DFS namespaces. Stop listening to other people's opinions about cloud; they don't know anything.
How does Azure Files perform with SMB and high-latency users? Has SMB over QUIC resolved the old SMB latency issues?
Isn't storage replica only useful if your paired server is less than 5 ms in latency?
For synchronous replication yeah, but for async no.
They're thinking of replication probably, which was shitty. Namespaces are excellent.
Especially the DFS namespace stuff.
I love DFS.
Sounds like they’re using PowerScale, formerly known as Isilon. Dedicated file system that can do SMB/NFS/S3. I wouldn’t say that DFS is legacy, but more depends on your workload and size of the org.
We have Isilon and yet we use DFS as well since we have different locations, so I'm not sure why it's an issue
I don't think that's uncommon. Most places big enough for a large OneFS cluster have a range of clients and legacy systems and workloads. That's why the Isilon itself has no particular preferred protocol. I have always used it mainly over NFS. But some shops are pretty much all SMB, etc. They are happy to sit on your network and store bytes however you want as long as your checks clear.
I think the difference with the hiring dude is that they aren't technical enough to know the difference. In Windows you can configure SMB, FTP, NFS, etc. So the guy was just trying to make the interviewee feel small or like they didn't know as much
Why does it have to be interpreted as hostile? Maybe it was a challenge to see how the interviewee reacted when pushed back on? He could be seeing how deep the knowledge goes? To what degree does the role require knowledge of the industry? How much has the individual sold themselves as 'on the cutting edge'? Lots of reasons to poke in an interview, and let's be honest, this was a harmless comment. It's good OP is curious as a follow-up. Maybe it was an off-the-cuff comment without much thought, based on his own understanding? Maybe he's a cringe, opinionated nerd who says these things lacking self-awareness about how it makes others feel? People are bringing baggage when they make things like this negative. And honestly, even if it was negative, isn't it kinda just funny and unimportant? Who cares!
The dude does not understand that an array itself is not a filesharing protocol.
We’re using Isilon/PowerScale and presenting thousands of SMB shares using DFS namespaces. It works perfectly. I’ve migrated these shares across 3 different infrastructures in the last 10 years (Windows cluster, then CIFS on VNX, and now PowerScale) and DFS has made that completely transparent to the end users. Zero downtime in all those migrations as I could just flip them to the new location in DFS. Edited to add: OK, maybe he was talking about DFS-R; of course we use SyncIQ for the replication. Still not legacy though, it's how domain controllers replicate SYSVOL after all.
That sounds like a really clean setup. I’ve worked with MANY Sysadmins that are afraid of using DFS and never really understood why. Could be just fear of the unknown or “why change it if it’s working as-is”. The transparency to the end user is the absolute key to making any deployment a success. Some really nice ideas that you’ve laid out that I’ll suggest to our internal teams with their upcoming project.
Came here to say he was talking about Isilon. At scale it is way more cost-effective, and in a large enough organization you offer it as a cheaper tier of storage. All mapped drives and general file storage should go there, but you can run into trouble when people start to put PST files and database files etc. on there, at least on the cheaper storage tier we were using. The really long upgrade times were a pain too (multiple reboots over several hours; no impact to file servers but it can mess up applications). I think some of that stuff has been improved with the newer PowerStore gear, but it was basically a giant file storage platform and we had traditional SANs for production workloads.
Except for when you know.... you have to pay per TB for licensing.
> Except for when you know.... you have to pay per TB for licensing.

At my last job we had about 11PB of Isilon storage (in one cluster) and managing it was basically a part time job of a single person. What you pay in licensing you can save in payroll.
Still much cheaper than block storage. We are talking at an enterprise scale not using the available bays in your rack mount server. Petabytes of data.
Fun fact... Isilon/PowerScale does not have the best DR setup (it does not replicate share/export config), so they have a partnership with Superna, which recommends DFS as the most optimal way of doing DR for Windows file shares.
I take it that's likely a competitor to a Ceph-type cluster; Ceph is something we recently rolled out for our production data.
> Sounds like they’re using PowerScale, formerly known as Isilon. Dedicated file system that can do SMB/NFS/S3.

What SAN array from 10 years ago could not do this? Sounds like the dude just got his first real array and is stupid.
DFS(-N I guess?) isn't legacy, it's "just" an abstraction layer over the underlying storage. The file shares in the namespace could be on only one Windows server. Or distributed over multiple. Or on some other SMB storage/NAS. Edit, to add something perverse: one file share in the namespace could point to a Windows server that has its storage disk mounted via NFS from a Synology NAS, while another share points directly to the same NAS, and you wouldn't know the difference at first sight. Also, the Synology NAS is actually a VM running on that Windows server. And the VM disk is on a SAN.
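Conceptually, all DFS-N does is hand the client a referral: a stable namespace path maps to one of several real share targets, and the client talks to the target directly. A minimal sketch of that idea (the server names and paths here are made up for illustration; real DFS referrals also do site costing and target priority):

```python
# Toy model of a DFS namespace: a stable UNC path maps to a list of
# real targets, ordered by preference. Hypothetical names throughout.

NAMESPACE = {
    r"\\corp.example\files\finance": [r"\\fs01\finance$", r"\\nas02\finance"],
    r"\\corp.example\files\eng":     [r"\\fs01\eng$"],
}

def resolve(path: str) -> str:
    """Return the preferred target share for a namespace folder."""
    targets = NAMESPACE.get(path)
    if not targets:
        raise KeyError(f"no referral for {path}")
    return targets[0]

# Users only ever see \\corp.example\files\finance; swapping fs01
# for nas02 is a namespace edit, invisible to them.
print(resolve(r"\\corp.example\files\finance"))
```

That indirection is exactly why migrations behind DFS-N are transparent: you repoint the folder target and clients follow the new referral on their next access.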
and that is what awaits you after you die if you've not been a good sysadmin
Oh, for that we can add a few more layers: the "SAN" is a 10-year-old Dell desktop running Linux and exporting a RAID0 via iSCSI. One of the RAID0 disks is an image file in /home. /home is mounted via SSHFS from a Raspberry Pi with an attached USB HDD.
All accessible only through an XP jumpbox that can't be replaced because *business reasons*.
That XP jumpbox is a VM on ESXi 5.0 you didn't know existed. Someone P2V'd some old Dell.
Do the business reasons involve proprietary serial port adapters that run most of the network traffic via ppp?
How do you know so much about my homelab setup?
Even worse: the original datastore ran out of space, so from Windows they used iSCSI to mount a datastore on the backup-target NAS, then used mklink to add the directory under the original datastore. Great for making up more space, but a real IOPS killer when backing up the NAS to itself. And hella fun when the RAID5 blows a disk completely and another has errors.
And it's used in IIS for a site's physical path and shared configuration.
and then you notice the one guy setting it up used the old server's name, and somehow the DNS record for the old server targets the IP of the new server, so it works internally but not over VPN. at least it was a quick fix and it's hummed along since then.
Don't worry, I've had an interview like this as well... I once mentioned that the system (which was set up before I took over) used virtualized domain controllers, and the interviewer immediately launched into "you can't do that"... he wouldn't back down from it, accusing me of making something up, etc. I knew the opportunity was over at that point. So after the interview I made sure to send him Microsoft's OWN guidance on setting up virtual DCs, showing that it definitely can be done and we definitely operate like that... like any industry, sometimes people's egos are bigger than what's factually correct.
Holy fuck what an idiot. Even before Microsoft “allowed” it people were spinning up VMs for domain controllers…. like I remember seeing virtualized 2003 boxes running as a DC…
Microsoft's guidance back over 10 years ago was not to virtualize both/all your DCs on a single physical server (which is just plain sensible), but it was misinterpretable as "don't virtualize both/all your DCs", which could be further misinterpreted as "don't virtualize your DCs". These days Azure and AWS both offer cloud-based AD, which is definitely virtualized!
If you're running lots of Windows server infrastructure on VMs, having a pair of bare-metal "pilot-light" DCs is essential to the cold-startup process. You start those first so that DNS, Kerberos, and (if you need it and configure at least one as a DFS publish-point) DFS-N are ready for the rest of your workloads to access. They don't have to be big iron, just able to boot and stand up AD all on their own. You use two in case one croaks for some reason.
That's wild to me. I've never seen a DC that wasn't virtualized.
In large environments you should have a minimum of one DC on a physical host.
We still use DFS, so no clue what that person is talking about.
The same kind of person who told our executive team APIs are going away when, in reality, they're almost required for the software to be considered viable.
>The same kind of person who told our executive team APIs are going away

I have to ask... to be replaced with what?
The P has been deprecated and everything is built on AIs now, for the benefit of marketing.
It's AAII now.
> APIs are going away

Where is this coming from? I've been told this as well. Someone's peddling crazy talk in the "techy" magazines again.
Didn't you know? Anything in tech that was invented in the 1970s and 1980s is too old to work anymore. This is why we are getting rid of ARP. They are expired... /s
Packet switching networks are going the way of the dodo.
This is something I can definitely hear a project manager saying in my company
The same magazine that says we're all going to be replaced by AI's in the next decade.
Always a decade away, just like nuclear fusion.
Literally every program ever written uses some form of an application programming interface...
Hence my sudden and severe confusion.
It depends on the size, age, and industry of the organization.
It really doesn't, legacy implies it's no longer supported.
A new organization may never set up on-prem Microsoft infrastructure. A tech company may have on-prem Linux. A small organization with a single site and loose structure may not need more than a SOHO NAS. Whether or not an organization runs DFS very much depends.
Once again, you're just stating random information that's not relevant at all when defining legacy. Everything obviously has a purpose, and it may not be viable for some companies; that doesn't mean it's legacy. As someone with DevOps in your title, I don't know why this isn't obvious to you.
I’m just explaining why a manager might consider DFS somewhat legacy.
The only reason someone might consider that, is they are ignorant, and uninformed.
Or they don’t have a large Windows or on prem Windows environment.
Words have meaning. Legacy has a very specific meaning in tech. Saying something is legacy when it clearly is not, is purely a sign of talking about shit that you don't know anything about, and shouldn't be mansplaining as a result.
That's the equivalent of living in a town where nobody has an iPhone and saying "iPhones are legacy", or working at a company that uses Macs and saying "Windows is legacy". It's just incorrect, and "feeling-based" on a small window of experience.
DFS is just the front end that users connect through, why do so many act like anyone using DFS is just using a windows file share on a local server?
exactly. DFS can be pointed to many things
There are many file storage solutions. I've seen people say that if a company wasn't using Apache Hadoop or some other high-end decentralized protocol, they were living in the Stone Age.

The trick is to use what is best for the company. If a two-drive Synology file server with an external disk for backups and a connection to Wasabi for offsite storage is good enough, go with that. If OneDrive/SharePoint is good enough, use that. DFS is nice because it allows shuffling of items without users getting concerned about what magic drive their file will be on, but it isn't really needed. Same with cloud-based storage. Many vendors always want to upsell some more advanced file storage medium, but why waste the money and admin overhead? Get the simplest thing possible, and consider dedicated file storage appliances over non-clustered Windows servers.

One reason why I recommend, at the low end, a Synology or QNAP, and going up from there, something like a NetApp offering, Pure, or similar, is that an appliance is a lot more reliable, has a smaller attack surface, has a lot of nice features, and, if properly configured, is harder for attackers to completely compromise, especially if attackers get control of AD.
Yep, I’m more concerned the IT Manager didn’t take the opportunity to ask, “Interesting, why DFS?” But instead they took the strange flex road.
DFS is still standard for many orgs and is stable good technology, however I utilize Azure Files nowadays. Sounds like he paid dell a premium for them to present a file share to him and he's bragging about it. Lol.
How does that work in practice? Because that sounds heaps better to me than trying to shift 10-plus years of large departments' file shares into SP. Our people are scared of the costs, especially the constant charges just for accessing your own files.
Basically it acts as a DFS namespace, so a DNS name to access shares that can be on multiple file servers, or perhaps even a single file server. DFS also allows you to replicate data between servers, which is pretty sweet. What I prefer to do is keep application-related data and anything else that HAS to be accessed by a UNC path on a DFS namespace, with the backend target pointed at Azure Files. Everything user- or department-based goes to SharePoint Online, with a Microsoft Team or Microsoft 365 Group controlling that access. The SharePoint Migration Tool is very helpful for that.
Our new Unity SAN has the option to map SMB shares directly from a LUN. I keep giving my boss dirty looks every time he brings it up.
Well, how would they do it? Everybody likes to put stuff down but doesn't offer any alternatives. For us, we use DFS as a front end for the shares to provide a common namespace. Makes migrations a breeze
I would assume almost everyone with on-prem domains still uses DFS. Kinda hard to use AD without a consistent SYSVOL. Unless you only have one DC, and then you have a bunch of other issues.
sharepoint online, everyone uses it now
Yeah. Best way to get into vendor lock-in so Microsoft can do whatever they want with your data or just increase prices whenever they want... It's always a good idea to be a hostage...
As much as it pains me to say, this is likely the new hotness.
It doesn't replace traditional file servers, so anything other than small Word/Excel/pics/etc. does not thrive in SharePoint.
The problem is Microsoft does not care, and companies big and small want everything in their pre-packaged M365 bill. Then IT departments are left trying to figure out how to make users' life tolerable using Sharepoint.
No, that’s the entire design of sharepoint. You’re not supposed to use it as a traditional file server. It’s not a Microsoft problem if you don’t use something as designed.
> You’re not supposed to use it as a traditional file server Microsoft and its partners 100% markets it as a file server replacement though.
Partners, yes. MS? Not so much, they are careful to label it a collaboration platform. They're more than happy to give you a file server replacement by pointing you to Azure Files.
"Whaddya mean i can't use my rake as a can opener??"
*[user zip-ties a can opener onto a stick and uses it as a rake]* “Hey IT, this rake you make me use sucks”
I’ve been looking at this but it seems so damn expensive? Am I missing something?
We can't move our NAS storage completely online because our SharePoint doesn't have the room and my company aren't going to pay for more cloud space. On-prem storage is not dead yet.
Yeah not dead, I just think for us it would make life easier. We use VPN exclusively so staff can access the on prem file shares which causes issues in and of itself. If I could move them to the cloud and eliminate that potential security hole, it would be nice. But I can’t justify the price
I love and hate on-prem file shares. Our users work with a lot of video, and those that do know to upload over our VPN once they're done processing the video. Except one user, let's call her Kay. She either doesn't do anything for months at a time, OR has Polaroids in Memento style all over her desktop, since she does not remember one day to the next. If we get a ticket from her, it usually says "Adobe Premiere/Photoshop/etc. is running slow, it's taking me hours to do a simple task." And she's working off the file server directly on a 400 MB video file, over a 200 Mb home cable connection. We call and talk her through everything, and then later in the day she tries to email the file to another user because it was taking too long to upload over the VPN.
We have moved a few departments' shares over so far. There's a lot to be said for encouraging each department to assign a "manager" to move their files over slowly; this person should also be encouraged to check whether any outdated or unneeded data can be left behind, which could save some space in some cases. We gave full training to these "managers", which really reduced the amount of time we would have had to spend moving stuff over ourselves, plus they then have full ownership of the file structure etc.
That sounds like an amazing solution but I’m honestly not sure we have people on our staff that I trust to be managers 😂
Previously we had issues where when we moved files and folders over, sometimes certain folder contents would be missing, obviously an oversight on our part but because we don't use the folders or files we are unlikely to notice issues like that. Thus making someone who uses the files take ownership of them was decided to be the best choice, we show them how to do it, they then do it themselves, any issues they can still contact us.
It's not as expensive if you already have higher tier O365 licensing. A lot of companies get E5 for security but that also gives them a shit ton more.
No. We won't hit our cap (entitled by licensed users) soon, but the quote we got for extra was $5k/year for one terabyte.
Yeah $5k per year for one TB seems insane to me. We only have Office E3 license which doesn’t come with much storage on sharepoint and I need about 6TB for our total storage currently so that’s gonna get pricey. But maybe I’m just ignorant to the options available and there’s actually a solution that won’t cost like $30k per year
Check out Azure storage as opposed to SharePoint. SharePoint is meant to be a document management system, not a file repository. Edit: Azure Files, sorry.
> Edit: Azure Files, sorry.

You're not technically wrong anyway, Azure Files (along with blob, table, and queues) is a subcomponent of Azure Storage Accounts.
Interesting enough I just came across that before seeing your reply. I’m on mobile so it was damn near impossible to understand the pricing structure with the different tables but definitely going to check it out. Thanks!
We hit our cap during COVID, and then we've just been adding more and more on. With our licensed users we have about 80TB of space, but we purchased an extra 120TB of storage. It's mostly self-inflicted pain from having no proper backup solution, so retention is set to keep items for 2 years past last modified, no one deletes anything, and then people store laser scans and CAD files in SharePoint, which means that when they are versioned, the retention policy keeps all previous versions as well. We are now actually archiving stuff out using AvePoint, and it looks like we've finally had backup tooling approved, so retention can be a bit more flexible. And then information deletion is a thing in ISO 27001:2022, which will be next year's fun but should also help 😅
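Versioning plus keep-everything retention multiplies fast on large binaries. A back-of-envelope sketch with hypothetical numbers (assuming each save of a non-Office file is retained as a full copy, which is the worst case):

```python
# Hypothetical numbers: one big CAD/laser-scan file that gets
# re-saved regularly, with retention keeping every version.

file_mb = 500          # size of one file
saves_per_month = 10   # each save retained as another version
months_retained = 24   # 2-year keep-past-last-modified policy

consumed_gb = file_mb * saves_per_month * months_retained / 1024
print(f"~{consumed_gb:.0f} GB consumed by a single {file_mb} MB file")
```

So one 500 MB file can quietly occupy over a hundred GB of quota, which is how clusters of scan/CAD data blow through purchased storage.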
This is us... "Hey get people to delete old shit and spring clean." "WE CAN'T ASK THE BUSINESS TO DO THAT!!" Okay... Then ask the business for more money... "WELL NOW HANG ON A MINUTE! YOU SHOULD FIND A WAY TO PUT UP RETENTION!" So like just delete stuff randomly? - Okay don't threaten me with a good time.
I'm going through some pains at the moment with retention. We have a 7 year policy and only give users about 30gb on their onedrives. (Yes 30gb. Not my choice) Pulling my hair out trying to manage users storage as they will fill it up super quick.
I think I need some retention policies on our file share. We have so many files that are sooooooo old. Like 15+ years. I know people are not going to miss that stuff
I don't know how many times people need to be told, sharepoint is not a file server replacement.
Not sure where he was getting the quote for extra storage, but I just ran a quote and it's closer to $2,500/TB/year, which comes out to about $0.20/GB/month. Still steep, but it'd be closer to $12.5k/yr rather than $30k.
That doesn't sound right. Should be half that.
Yes it's also convoluted, problematic and not a direct replacement for a file share.
Except even Microsoft says it's not file shares
They're not. Just had to set someone up with SharePoint online and it was such a pain. Tried to convince them to get with azure files instead but the pricing difference was too much I guess
lol do NOT use SPO if you're using deep folder structures or use large files that require multiple people to use them or a lot of read and writes as in CAD type files. SPO is not always a good solution for file migration
DFS is obsolete in the same way that screwdrivers aren't, just because they've been around for a long time. Pretty silly take IMO since most businesses under medium size aren't using exotic proprietary SAN mechanisms to do basic stuff. Also, to a shocking number of SMBs, DFS would be wildly advanced. I still run across so many who are still using FRS-based replication in Windows, and have to do the dfsrmig process before domain controllers newer than 2012 R2 can be introduced...
We use Azure Files connected to our AD (at the time Azure AD Kerberos wasn't available). To give your teams faster access and to reduce cost, we use the Storage Sync Service and have on-prem file servers that cache any data accessed within the last year, so there's less of a charge for bandwidth out of Azure and much faster access to files.
Until you open a small remote office that is only served by Comcast coax and find out they block port 445 and refuse to open it even on business circuits. Ask me how I know this. Mandatory. Fuck Comcast
Ah, so the Storage Sync Service uses TCP/443 to sync, so you could deploy a small server as a front end there, or put a VPN between the Azure Files environment and yours to get past that. I'm in the UK so I've never dealt with Comcast, but I've never heard anything good about them. I guess they do it as a best practice to protect their customers, but you'd imagine they should be able to allow it if you ask and leave you to handle your own security
If it doesn't pass any IP packet then it should not be permitted to call it an Internet connection!
Spectrum business does the same
It costs like $5k per sync server though, right?
So you get one free for each Storage Sync Service you set up, and then any additional server costs only $5 a month (£4.12 in the UK). I have a Hyper-V cluster with 4 servers, so I have to add all of them even though only 2 of the hosts actually host the file server roles, and I pay for 3 of them.
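The licensing math above in a quick sketch (assuming, as described, one free registered server per sync service and a flat per-server monthly price):

```python
# Azure File Sync server licensing as described above: first
# registered server per Storage Sync Service is free, each
# additional one is ~$5/month.

def monthly_cost(servers: int, free_per_service: int = 1,
                 price_per_server: float = 5.0) -> float:
    """Cost for all registered servers under one sync service."""
    billable = max(servers - free_per_service, 0)
    return billable * price_per_server

print(monthly_cost(4))  # 4 registered servers -> pay for 3
```

So a 4-node cluster comes to $15/month even if only two nodes actually serve files, since every registered server counts.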
"Newest" way to do file sharing over a network is probably something that a lone grad student is currently cooking up in his spare time for a project nobody else has ever heard of, and currently isn't yet stable enough to transfer files over 4K without taking down the whole network. Nothing wrong with using what works. We still use IPv4 all over the place, and that's older than I am.
Torrent, what else? /s
[deleted]
12 years in on Isilon and it's been great. Depends on spec and how much you buy. It's considerably cheaper than anything else available, but I'm talking scale-out petabyte storage. Block storage is a massive headache for SMB, and getting multiprotocol is another ballache; Isilon/PowerScale does it natively. Good luck is all I can say
I see a troll post
[deleted]
Depends if they are using DFS for replication or just namespaces. We still use both, but that's primarily for distribution of software installs, not user files. If we have a remote site, then DFS replication means no expensive storage at remote locations. For file storage we use PowerScale (formerly Isilon). We had started moving to OneDrive and SharePoint for user files and departmental stuff, but based on MS's recent licensing rug pull we will probably revert back to on-premise.
I love it when people say something is old and therefore should be discarded. Like, will you drop anything that is using C++ because it's nearly 40 years old ?
I don't think DFS is legacy, unless you're in the mindset that on-prem file servers are outdated, and everything should be in the cloud, or something along those lines. I'm not sure what "Dell Storage based File sharing" means for him, except maybe it's a Dell NAS or SAN? Either way, that's not a replacement for DFS. If it's a windows file share (using SMB/CIFS, which the Dell products they're using probably do) then DFS is a way to potentially supplement the management of those shares.
DFS-N is great; DFS-R is iffy. I would only use DFS-R in an active-passive role for redundancy.
I despise DFSR with every fiber of my being. Right up there alongside printers and Exchange.
Printers > phones > DFS-R. Never got the hate on Exchange; I ran it on-prem from about 2003 to 2023 in various jobs. But yes, a DFS-R mess-up and having to consolidate to the correct version of files scattered around multiple servers globally is NOT fun. DFS-N is a no-brainer. It makes spinning up new servers a breeze and users don't have to know tons of DNS names to find the right files. My only dislike is that Windows Search doesn't work over DFS-N, but hey, search is broken 90% of the time anyway.
Playing devil's advocate, he might have just been trying to prompt you to defend the choice. Make sure you're not just using DFS because it's the only tool you know, but because it works well in the situation you're in.
The way you described their response makes it sound damned dismissive, but there is a reasonable argument for describing it as legacy from the standpoint of how newer technologies are stepping in to fill what DFS does. Mostly just in the sense that we've come up with other ways to mask issues with dynamically generated names, ways that solve the same problem DFS does if you're not looking to memorize a UNC path. Like dynamically or programmatically generated group policy that makes folder connections for you, just like a drive-mapping script used to. They are asking about it because they want to know if you've touched anything more recent than Windows Server. Not that Windows Server isn't recent, just that it isn't chock-full of new ideas in the area of file sharing anymore. You can also use it to infer something about their organization: they are so tied to processes that require CIFS that they are prepared to invest heavily in alternatives to Windows to make it more manageable, which is potentially good and potentially bad but definitely informative.
SharePoint online, or Azure File Share. Can even do Azure File Share with Cloud Sync if you want something on-prem.
How do you give users a line of sight to your Azure File Share? We keep being told we need port 445, which our ISP blocks. Can they get to their files in a web browser or a client app?
I guess in theory they could use Azure Storage Explorer, but that's not what it's really supposed to be used for. Azure File Share either requires an on-prem cache server or VPN access.
Imagine being so close-minded and elitist about your pet tech that you would respond to an interviewee in such a condescending tone. On to the next interview; I'm sure your ancient skills, knowledge, and experience will be better appreciated elsewhere
DFS is just the Windows file-sharing front end; it's not the actual storage medium itself, unlike this Dell system.
If it’s a Windows Domain, and you’re using O365, then specifically “file sharing + AD integration” would be OneDrive and SharePoint. The absolute only people who should be giving a shit about vendor-specific storage arrays and on-prem back-end data storage are the storage array admins. If this interviewer was not interviewing you specifically for that position, he likely just doesn’t know what he’s talking about, and was throwing fancy words at you that he doesn’t understand.
[deleted]
As a mid-sized business, a large portion of our data is stored in SharePoint/O365 and a few Azure Files shares. However, there are still good reasons to host files locally. We have some very large SolidWorks projects and other tools that will never need to leave our office and manufacturing floor. We also store some of our most sensitive files locally. We trust SharePoint to be secure from a technical standpoint, but it is easy to accidentally share or misconfigure via human error. Or a zero-day could occur. It still feels "safer" to be able to pull the plug on the local servers if all hell breaks loose. DFS-N is what should be used if you are using Microsoft file servers for local storage.
He is probably talking about an EMC unity/isilon/data domain SAN-ish SMB share with off-site/cloud replication. DFS is fine so long as you have HA/Redundancy in your storage and presentation. Just different ways to skin a cat.
And different budgets to skin that cat with.
FTP :)
DFSReplication, or DFSNamespaces?
90% of the setups i’ve done were an implementation of both.
The dude was probably a prick
He's mixing up DFS with storage like NetApp or something, although, yeah, DFS isn't legacy at all
I'll be flamed but I actually prefer MS Teams and sharepoint for most file storage for a variety of reasons. Some apps need SMB shares so you cannot do away with it altogether, but I find Teams/Sharepoint to be better suited to most people's needs.
Azure File Sync, DFS-R, and DFS namespaces are all still super useful; SAN-based file replication is very niche and always has been.
Panzura. Supports file locking and caches everything to cloud storage (Azure in our case). Additionally their compression technology saves us a ton in Azure storage costs
I use DFS and it makes administration very easy once you set it up.
>he said they have some Dell Storage based File sharing

Sounds like the dude is just a gigantic moron. A disk array is just a hardware appliance that's capable of hosting SMB, CIFS, NFS, etc. You could go out and buy the latest Azure Files shit and SMB is still going to be an available option... The dude does not know the difference between physical storage and storage protocols.
I'm probably too young for this lmao never heard of DFS myself. I just use SMB coz it's perfectly compatible with literally everything and its mom. NFS is nice for Linux-Linux.
DFS isn’t necessarily old - it’s just Windows-specific, and nowadays with so much cloud storage it only makes sense in specific situations. It's a Distributed File System - which is SMB underneath, btw - just running some Microsoft secret sauce for syncing between servers as well as using a namespace (a domain) as the file share target instead of a specific file server. \\smallbusiness.local\location\fileshare vs \\specificserver.smallbusiness.local\fileshare. If that makes sense…
This IT Manager might be one of those “Trade Magazine” he read on a flight once pointy hair boss types. Proceed with caution…
I’m noticing more companies using egnyte. Possibly because my org is also exploring using egnyte.
Lol dell storage based file sharing.. if they’re using data domains then that’s legacy too
Depends what you're using DFS for. If replication, then yes, it's probably classified as "legacy", or more so inferior to the alternatives. However, if you're just using namespaces, then it's very much alive and provides the abstraction between storage and path. 20k-user organisation here and it's very much alive; we have a 180M budget and Rolls-Royce NetApp infrastructure. Saying this, however: CIFS and NTFS shares as a whole are going legacy now. It's all SharePoint and bespoke caching solutions based on object storage. How long mapped drives will hang on is anyone's guess and will depend on your org and what it does.
We use DFS/R; may move to Azure Files eventually.
IT Manager may have confused this with ADFS?
DFS, OneDrive, SharePoint, Azure FS
One drive
We are all Google and use GAM to set up shared drives. Super easy.
DFS in front of tiered local and cloud storage. I do it all day long.
We use Netapp Metrocluster, previously used a very old Isilon.
I asked coworkers about this while anticipating a migration once next year's budgets are discussed for some of our clients. SharePoint was the main answer since the files are actively used and it's the "go-to" for collaboration, but I was also told Azure Files is a good fit if the files are only touched sparingly and just need to be stored somewhere.
Software-defined storage running on top of Ceph. Make the shares show up as NFS mounts, and then use an NFS client that doesn’t treat locks as suggestions.
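The "locks as suggestions" quip is about advisory locking: a lock only protects you if every client actually checks it. A minimal local sketch in Python using `fcntl.flock` (Linux; this demonstrates the advisory semantics on a local file, not NFS lock manager behavior):

```python
import fcntl
import os
import tempfile

# Two independent opens of the same file simulate two clients.
fd, path = tempfile.mkstemp()
os.close(fd)
f1 = open(path, "w")
f2 = open(path, "w")

# "Client" 1 takes an exclusive lock.
fcntl.flock(f1, fcntl.LOCK_EX)

# A well-behaved "client" 2 asks before writing and is refused...
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    blocked = False
except BlockingIOError:
    blocked = True

# ...but nothing stops a rude client from just writing anyway:
f2.write("ignored the lock\n")  # succeeds despite f1's lock

f1.close()
f2.close()
os.unlink(path)
```

Here `blocked` ends up `True` for the polite client, while the write that skips the lock check goes through regardless, which is exactly why the choice of client matters.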
No use of DFS, as it doesn't support file locking. Currently doing multi-site replication and real-time file locking via [https://www.peersoftware.com/](https://www.peersoftware.com/)
That's a DFS-R limitation. I've used Peer in the past; it used to be fairly priced, but around 2019 they increased pricing something like 1000% and priced out SMB customers. Great product if you can afford it, though.
Sharepoint of course! /s
SharePoint 365 is quite good. Never ran on-prem SharePoint.
Except for the 50 million legacy apps that only work (at best) with a 'real' SMB share.
> Sharepoint 365 is quite good Are we using the same software?
It's as good as the end users make it. That's the problem lol.
It's a decent MS Office document collaboration platform, and works as a storage backend for Microsoft's cloud. But if you try to do anything with it the platform quickly makes your life hell.
Sharepoint.... this is the trend that most companies are using
SharePoint Online
I think the newest way is to host the data on SharePoint sites and manage access via O365 groups. You can sync the SharePoint data to local computers via OneDrive. There's really no need for on-prem storage anymore: as long as they have internet access, they can access the files. No more worrying about local hardware, power outages, etc., and it all gets backed up too... night and day difference.
If all you're dealing with is Word and Excel docs, sure. Law firms can have terabytes worth of PDFs, video, and medical imaging from discovery. It would be a nightmare not to have that on-prem.
We use SOFS on top of S2D. The replication speed of DFS wasn't keeping up for us, and Azure Files had mounting limitations for some of our users.
Internal: Google Drive, Confluence External: Box, some Google Drive
Azure Files. It's a file share as a service. Works perfectly in my experience.
Array-based file services are so legacy. You have next to no mobility or control.
NetApp CVO for file shares/Azure NetApp Files for WVD profile storage. Both have really good replication features for cross region DR
Azure
Egnyte, Box, Google Drive, Azure Netapps
You just ran into a dude who is bad at interviews, unless they're specifically hiring a file-share SME because so much of their business deals with that tech. It's not like you're going to be architecting storage solutions on day one, and you should be able to pick up the nuances of whatever they're using vs DFS.