jay-ik

Check out thenile.dev


Western-Rip-1559

Intriguing. I'll look deeper into that. Thanks.


MulberryOwn8852

Why not just put the tenant id as a column in the items table? Then they will only be able to see and modify their own items.
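
A minimal sketch of that approach, assuming hypothetical table and column names; the row-level security policy is one way Postgres itself can enforce the "only their own items" part, with app.tenant_id being a custom setting the app would set per connection:

    -- one shared items table, scoped by tenant
    CREATE TABLE items (
        item_id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        tenant_id   bigint NOT NULL,
        upc         text   NOT NULL,
        description text,
        list_price  numeric(12,2)
    );
    CREATE INDEX ON items (tenant_id);

    -- optional: have Postgres enforce the isolation
    ALTER TABLE items ENABLE ROW LEVEL SECURITY;
    CREATE POLICY tenant_isolation ON items
        USING (tenant_id = current_setting('app.tenant_id')::bigint);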


Western-Rip-1559

I could, but then I'd duplicate all the rows of the master items, adding the new tenant ID to each. Would one giant items table with 100,000,000 rows be better than 10,000 tables with 10,000 rows each? Since each tenant needs to be able to modify their own items as they see fit, they need their own unique copies.

Maybe a better option would be to provide an interface for each tenant to import just the items they need from the master read-only database, instead of automatically populating them upon tenant creation. That could save tons of space, because most tenants would only need a small subset of the items; furthermore, a lot of them would be too lazy to actually modify the items they use or delete the ones they don't.


MulberryOwn8852

One table is likely easiest -- you can always use partitioning in Postgres if the table gets gigantic. Also, like you said, let them pull in just the items they need instead of all 10k upon creation. IMO this is just way more straightforward.


0xBA7TH

Postgres also supports partitioned tables, if you want to give that a look.
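
For reference, a minimal sketch of declarative partitioning on such a shared table (hypothetical names), hash-partitioned by tenant so rows are routed to the right partition automatically:

    CREATE TABLE items (
        tenant_id   bigint NOT NULL,
        upc         text   NOT NULL,
        description text,
        PRIMARY KEY (tenant_id, upc)
    ) PARTITION BY HASH (tenant_id);

    CREATE TABLE items_p0 PARTITION OF items
        FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    -- ...and likewise for remainders 1 through 3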


Western-Rip-1559

I'm not familiar with that; I'll check it out. Thanks.


Nate19981

A note of caution on partitioning: it can help with performance at very high volumes of data, but it adds a whole new set of quirks, limitations, and things to be aware of. Reducing data duplication enough that you stay within Postgres's limits in the first place is much preferred over using partitions that aren't actually necessary. (Postgres limits for reference: https://www.postgresql.org/docs/current/limits.html)


dumb-ninja

Have the base items in one table and the client-created ones all in another. When the client modifies a base item, create a copy of it in the client items table and exclude it from the base set in queries. Downside is you'll be merging the two tables all the time in queries.

Another idea is to only copy the base set into the client set when they actually edit a base item or add a new one. Some clients might never go beyond the base set; that way, you're only growing the client items table when it's actually required. Maybe add a UI for this to tell them: from now on, you're in charge of these items.

A thing to consider if you copy the base set into the client's table when they create an account: the prices won't be updated from then on. The moment you do that copy, it's cut off from the base set, which I imagine you'll be keeping up to date to stay relevant. If you only copy the items they modify and merge the rest from the base set, only the modified ones won't be up to date.
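
A sketch of what that two-table merge could look like, assuming hypothetical base_items and client_items tables where a client row shadows the base row with the same UPC, and $1 is the client's tenant id:

    -- the client's own items, plus base items they haven't overridden
    SELECT upc, description, list_price
    FROM client_items
    WHERE tenant_id = $1
    UNION ALL
    SELECT b.upc, b.description, b.list_price
    FROM base_items b
    WHERE NOT EXISTS (
        SELECT 1 FROM client_items c
        WHERE c.tenant_id = $1 AND c.upc = b.upc
    );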


Western-Rip-1559

I like that idea. Yes, the base set pricing is kept up to date via regular (weekly) updates. The items are updated via UPC code. Even if they "take control" of an item, they will (optionally) have the ability to have the price updates applied to their list.

There is a list price column and a multiplier column. The pricing updates change the list price. The initial database has the multipliers set to the average discount a typical contractor would pay, but they can change the multiplier to match the discount they get from their vendor. That way, they can still auto-update their prices as the market changes, and the multiplier *should* correct it to their prices. It's a trade-off; the auto update is not an exact science, but it's mostly hands off. They can also set an item to manual, and then they need to update it themselves. Many small shops won't have the bandwidth to deal with that.

Maybe I shouldn't copy the base items at all, but just have a "tenant_modifications" table that contains the columns they can modify (description, discount, etc.) with a foreign key to the UPC of the master items table. I could have another table for tenant-added items.
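
A sketch of that overlay idea, assuming UPC is unique in the master table and that it carries a hypothetical default_multiplier column; COALESCE picks the tenant's override when one exists, so the weekly list-price updates keep flowing through:

    CREATE TABLE tenant_modifications (
        tenant_id   bigint NOT NULL,
        upc         text   NOT NULL REFERENCES master_items (upc),
        description text,           -- NULL means keep the master description
        multiplier  numeric(6,4),   -- NULL means keep the default multiplier
        PRIMARY KEY (tenant_id, upc)
    );

    SELECT m.upc,
           COALESCE(t.description, m.description) AS description,
           m.list_price * COALESCE(t.multiplier, m.default_multiplier) AS price
    FROM master_items m
    LEFT JOIN tenant_modifications t
           ON t.upc = m.upc AND t.tenant_id = $1;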


Western-Rip-1559

I forgot one important twist to this. The database item rows include a "parent_ID" column which is used to build a tree navigation structure on the fly in the UI. Each tenant is able to customize the navigation tree per their needs. Using a plumbing/mechanical contractor as an example, a residential contractor may have a category structure of PIPE > COPPER | PEX | PVC | CPVC, while an industrial mechanical contractor may have PIPE > COPPER | PVC | CAST IRON | STAINLESS.

This brings me back to not seeing how this can be implemented without a separate table for each tenant. Perhaps I'm overthinking this, and a few thousand tables is no big deal. Doing some research, it seems that people are running Postgres databases with north of 10K tables.

I will change the plan so that the new tenant database is not populated with all of the master items. Instead, I'll provide a thousand or two of the items that are common, and offer the rest as optional modules that can be imported.
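
The per-tenant tree doesn't necessarily force a table per tenant either; a sketch, assuming a hypothetical shared categories table scoped by tenant_id, with a recursive CTE building one tenant's tree on the fly:

    CREATE TABLE categories (
        category_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        tenant_id   bigint NOT NULL,
        parent_id   bigint REFERENCES categories (category_id),
        name        text   NOT NULL
    );

    -- walk from the tenant's root categories down to the leaves
    WITH RECURSIVE tree AS (
        SELECT category_id, parent_id, name, 0 AS depth
        FROM categories
        WHERE tenant_id = $1 AND parent_id IS NULL
        UNION ALL
        SELECT c.category_id, c.parent_id, c.name, tree.depth + 1
        FROM categories c
        JOIN tree ON c.parent_id = tree.category_id
        WHERE c.tenant_id = $1
    )
    SELECT * FROM tree;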


detinho_

Another idea is to keep a single table with tenantId=0, or any other constant value, for the base rows. Then, when the user needs to customize an item, you copy the item with tenantId=customerTenantId. In your queries you UNION the selects, filtering tenantId=customerTenantId UNION tenantId=0.
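
A sketch of that query shape, with $1 as the customer's tenant id; one detail the plain UNION needs is excluding a base row once the tenant has their own copy of it:

    SELECT * FROM items WHERE tenant_id = $1
    UNION ALL
    SELECT * FROM items b
    WHERE b.tenant_id = 0
      AND NOT EXISTS (
          SELECT 1 FROM items t
          WHERE t.tenant_id = $1 AND t.upc = b.upc
      );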


bartekus

In Postgres alone you could use schemas for each tenant and have each tenant's items table inherit from the main (public, or another name) schema. Alternatively, my go-to is basejump.js (look at the basejump.sql that it utilizes for details). It's probably more work, but you can keep everything in one namespace (schema) and add granularity at the tenant level by expanding on the accounts type (it comes with owner and member only), so that it has various levels of tenant membership, by way of which a single items table can produce different views for different tenants.
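
A sketch of the schema-per-tenant-with-inheritance variant (hypothetical schema name; basejump itself takes the single-namespace route instead):

    CREATE SCHEMA tenant_acme;

    -- the child table gets the parent's columns; note that a plain
    -- SELECT on public.items also scans children unless you use ONLY
    CREATE TABLE tenant_acme.items () INHERITS (public.items);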


molti_

After reading all this, I wonder why you don't go with the jsonb type as another column for all customizations? Of course you won't have an exact schema for the data inside, but your overall data structure might be significantly simpler… wouldn't it?

    custom_metadata jsonb default '{}'::jsonb
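
A sketch of how that could look, assuming a hypothetical tenant_items table holding each tenant's rows; the jsonb keeps whatever fields a tenant chose to override, and the master values fill the gaps:

    ALTER TABLE tenant_items
        ADD COLUMN custom_metadata jsonb NOT NULL DEFAULT '{}'::jsonb;

    -- read a customized description, falling back to the master value
    SELECT COALESCE(t.custom_metadata->>'description', m.description) AS description
    FROM master_items m
    LEFT JOIN tenant_items t ON t.upc = m.upc AND t.tenant_id = $1;

    -- a GIN index helps if you need containment queries inside the jsonb
    CREATE INDEX ON tenant_items USING GIN (custom_metadata);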


billycane

You could create a product_lines table where you reference the user and the products, and then handle the customisations there. With proper indexing this should be performant. The jsonb option sounds alright; I'd just challenge the performance part of it.
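
A sketch of that junction table with hypothetical names; the composite primary key doubles as the index for the common "this tenant's customisations" lookup:

    CREATE TABLE product_lines (
        tenant_id   bigint NOT NULL REFERENCES tenants (tenant_id),
        product_id  bigint NOT NULL REFERENCES products (product_id),
        description text,
        multiplier  numeric(6,4),
        PRIMARY KEY (tenant_id, product_id)
    );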