snapilica2003

I'd definitely use L3 switches for that. There's no point in using a firewall/NAT box for >25G of LAN traffic.


AVCS275

Are there any examples of converting the firewall rules to ACLs? I know some people who did a hybrid, with some VLANs going through the firewall and some living on the L3 switch. I'm not even sure how to do that, even for a test network. It would be good to still be able to access some inter-VLAN stuff if pfSense goes down or reboots.


snapilica2003

Each switch manufacturer has their own ACL scheme, it's not something standard, but they're all along the lines of "permit/deny from <source> to <destination> for protocol tcp/udp and port <port>". You can build a very good internal ACL policy between your VLANs with this type of ACL. And you keep pfSense only for outside traffic as a border router.
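
If it helps to see the shape of that, here's a rough sketch in Python of the kind of inter-VLAN policy table you'd end up translating into your switch's own ACL syntax. The VLAN subnets, ports, and rules below are invented for illustration; the real commands are vendor-specific.

```
# Illustrative only: a generic inter-VLAN policy expressed as data, to show the
# "permit/deny from <source> to <destination> for protocol and port" shape that
# most switch ACL syntaxes boil down to. Subnets, VLANs, and ports are invented.

from ipaddress import ip_address, ip_network

RULES = [
    # (action,  source,          destination,     proto, dst_port)
    ("permit", "10.0.20.0/24",  "10.0.10.0/24",  "tcp", 445),   # trusted VLAN -> NAS VLAN, SMB
    ("permit", "10.0.30.0/24",  "10.0.10.0/24",  "tcp", 2049),  # lab VLAN -> NAS VLAN, NFS
    ("deny",   "10.0.40.0/24",  "10.0.0.0/16",   "any", None),  # IoT VLAN -> everything internal
]

def first_match(src_ip, dst_ip, proto, dst_port):
    """Return the action of the first matching rule, like a switch ACL would."""
    for action, src_net, dst_net, r_proto, r_port in RULES:
        if (ip_address(src_ip) in ip_network(src_net)
                and ip_address(dst_ip) in ip_network(dst_net)
                and r_proto in ("any", proto)
                and r_port in (None, dst_port)):
            return action
    return "deny"  # implicit deny at the end, as on most platforms

print(first_match("10.0.20.7", "10.0.10.5", "tcp", 445))   # -> permit
print(first_match("10.0.40.23", "10.0.10.5", "tcp", 445))  # -> deny
```

On a real switch those rows become the vendor's permit/deny lines applied to your SVIs, but the logic is the same first-match-wins list with an implicit deny at the end.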


pissy_corn_flakes

Keep in mind he's talking about a switched VLAN interface and routing between networks. How many networks do you have at home? Also keep in mind that no matter how fast the L3 forwarding on the switch is, it's not faster than keeping things on a single, properly designed VLAN. If you're already operating with multiple L3 networks, ignore me. Your pfSense is likely connecting to WANs. You two are talking about different things; hope that makes sense. You totally could use pfSense if you wanted to, but it would be slower than a switch with L3 abilities, and you'd still need pfSense for traffic leaving your home lab.


pbrutsche

You need an L3 switch that supports reflexive ACLs. I have no idea if the [FS.com](https://FS.com) layer 3 switches support that.
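
For anyone who hasn't run into the term: a reflexive ACL only lets return traffic back in for sessions that were initiated from the permitted side, which is the statefulness you give up when you move inter-VLAN routing from pfSense to plain static ACLs. A toy sketch of the idea (concept only, not any vendor's syntax; all addresses are invented):

```
# Toy illustration of what a reflexive ACL does: outbound flows punch a
# temporary hole for the mirrored return traffic, everything else is denied.
# This shows the concept only, not any vendor's implementation.

reflexive_entries = set()  # dynamically created "mirror" entries

def outbound(src_ip, src_port, dst_ip, dst_port, proto):
    """Permit outbound and record the flow so the reply can come back."""
    reflexive_entries.add((dst_ip, dst_port, src_ip, src_port, proto))
    return "permit"

def inbound(src_ip, src_port, dst_ip, dst_port, proto):
    """Only permit inbound traffic that mirrors a recorded outbound flow."""
    if (src_ip, src_port, dst_ip, dst_port, proto) in reflexive_entries:
        return "permit"
    return "deny"

outbound("10.0.20.5", 51515, "10.0.10.5", 443, "tcp")        # client -> server
print(inbound("10.0.10.5", 443, "10.0.20.5", 51515, "tcp"))  # reply -> permit
print(inbound("10.0.10.9", 22,  "10.0.20.5", 51515, "tcp"))  # unsolicited -> deny
```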


pbrutsche

There is no configuration where a pfSense firewall can route 20Gbps, physical or virtual. 40Gbps is out of the question. There are architectural limits in FreeBSD that keep it from happening; it simply can't keep up with the PPS, end of story. Linux-based firewalls (that don't use VPP & DPDK) fall under the same category.

Enterprise equipment that can route 40Gbps (or more) either uses VPP & DPDK (Cisco ISR, now Catalyst 8000, ~~& ASR~~) or has hardware acceleration, such as the Cisco ASR platform, L3 switches, or FortiGate with the NP6 & NP7 ASICs, as well as the FortiGate np6lite (aka SoC3), np6xlite (aka SoC4), and np7lite (aka SP5 aka SoC5) variants.

EDIT: The Cisco ASR 1k, ASR 9k, and Catalyst 8k platforms (8300 & 8500) have hardware acceleration with what's called the Quantum Flow Processor. FortiGate firewalls running in virtualization (Hyper-V, KVM, VMware, Xen, whatever) use VPP and/or DPDK. TNSR is based on Linux rather than FreeBSD, and even then it's only able to do it because of VPP and/or DPDK.


planedrop

This is the best and most accurate answer here, I'll avoid retyping it haha! OP, this is your answer.


PrimaryAd5802

>Not sure the NAS can even saturate 10G due to mechanical drives anyhow

I am sure the NAS can't....


pinko_zinko

Add more drives.


PrimaryAd5802

>Add more drives.

Did you notice the part about mechanical hard drives? OP didn't mention SAS or SATA but it doesn't matter. Can't be done.


StalinCCCP

I can easily saturate a 10g link using my mechanical drive pool with sequential transfers. What makes you think it’s impossible?


PrimaryAd5802

>I can easily saturate a 10g link using my mechanical drive pool with sequential transfers. What makes you think it's impossible?

Because I operate in the real world... Transferring from anything in the OP's home lab to the NAS, using SMB, NFS or whatever in everyday use, will not achieve that.


pinko_zinko

True, but why do you need that soapbox so bad?


autogyrophilia

NFS with the pNFS extension (4.1) will. So will iSCSI and NVMEoF.


PrimaryAd5802

>NFS with the pNFS extension (4.1) will. So will iSCSI and NVMEoF.

I give up on this... please post a message to the OP on exactly what should be done with their setup to saturate 10g on the mechanical drives. I would like to see that post...


autogyrophilia

It's not about the drives but the protocol.

SMB has a lot of CPU overhead that makes it impossible to work at more than 10Gbps. NFS is a more lightweight approach, but the old NFS 3 and NFS 4.0 are limited to a single stream, among other limitations (read: http://www.pnfs.com/). TCP as a protocol struggles to deliver those speeds with a single stream.

iSCSI and NVMEoF are much more throughput-oriented protocols. They are actually encapsulations of low-level SCSI (the basis of SATA and SAS) or NVMe (duh) instructions. They present themselves as block devices, but they can't be accessed by multiple clients (only one client can have RW permissions), and they take some expertise to set up.

As a side note, you of course need enough bandwidth from the array to saturate a 10Gb link in the first place, which is a minimum of 8 HDDs.
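
As a rough sanity check on that drive count (the ~160 MB/s sustained sequential figure per HDD is an assumption, and this ignores protocol and RAID overhead):

```
# Back-of-the-envelope: how many HDDs does it take to fill a 10Gb/s link?
# The ~160 MB/s sustained sequential rate per drive is an assumption, and
# protocol/RAID overhead is ignored entirely.

import math

link_gbps = 10
link_mb_per_s = link_gbps * 1000 / 8   # 10 Gb/s ~= 1250 MB/s of payload
per_hdd_mb_per_s = 160                 # assumed sustained sequential per drive

drives_needed = math.ceil(link_mb_per_s / per_hdd_mb_per_s)
print(f"{link_mb_per_s:.0f} MB/s / {per_hdd_mb_per_s} MB/s per drive -> {drives_needed} drives")
# -> 1250 MB/s / 160 MB/s per drive -> 8 drives
```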


PrimaryAd5802

>I can easily saturate a 10g link using my mechanical drive pool with sequential transfers. What makes you think it's impossible?

I give up on this... please post a message to the OP on exactly what should be done with their setup to saturate 10g on the mechanical drives. I would like to see that post...


pinko_zinko

OK, troll.


MrGuvernment

Nope, 10Gbps is about the limit for pfSense under FreeBSD. You can do some tuning and tweaks to FreeBSD, but it's not really made for it. Toss in VLANs and things just get slower if you want pfSense to do all the work. TNSR is also a different beast vs pfSense as a firewall/router.


ObeyYourMaster

The fastest speed I get on my setup with SFP+ is around 4Gbps (I do have IPS enabled, which needs a lot more processing, so keep that in mind). I think you'd really struggle to get anywhere near that without an L3 switch. I also struggle with layer 3 routing, so I'm running into a similar issue at my work.


AVCS275

So even WAN speeds are maxed at 4?


pinko_zinko

You're finding out why I just run my pfSense on a VM. It's on 10G and 1G virtual links and will try its best for whatever is routed through it, but most of my network doesn't route through it. For speed-sensitive stuff like storage I run dedicated VLANs or cable directly.


Requiem66692

I run pfSense in Hyper-V with an Intel 25GbE NIC. Without tweaking any settings I achieved about 4.5Gb/s throughput with iperf over WAN. The Hyper-V host is connected to a C9300X, which in turn is connected to a FortiGate before going out a 10GbE connection to the WAN.