Can PostgreSQL have a uniqueness constraint on array elements?

I'm trying to come up with a PostgreSQL schema for host data that's currently in an LDAP store. Part of that data is the list of hostnames a machine can have, and that attribute is generally the key that most people use to find the host records.

One thing I'd like to get out of moving this data to an RDBMS is the ability to set a uniqueness constraint on the hostname column so that duplicate hostnames can't be assigned. This would be easy if hosts could only have one name, but since they can have more than one, it's more complicated.

I realize that the fully-normalized way to do this would be to have a hostnames table with a foreign key pointing back to the hosts table, but I'd like to avoid making everybody do a join for even the simplest query:

select hostnames.name, hosts.*
from hostnames, hosts
where hostnames.name = 'foobar'
and hostnames.host_id = hosts.id;
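
For reference, here's a rough sketch of what that normalized layout might look like (the table and column names are taken from the query above; everything else is assumed):

create table hosts (
  id serial primary key
  -- ... other host attributes
);

create table hostnames (
  host_id integer not null references hosts (id),
  name text not null unique  -- per-name uniqueness falls out of this design
);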

I figured using PostgreSQL arrays could work for this, and they certainly make the simple queries simple:

select * from hosts where names @> '{foobar}'; 
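
For context, the array-based table I have in mind looks roughly like this (column names assumed), with a GIN index so the containment query above can use an index instead of a sequential scan:

create table hosts (
  id serial primary key,
  names text[] not null
);

-- a GIN index on the array column supports the @> containment operator
create index hosts_names_idx on hosts using gin (names);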

When I set a uniqueness constraint on that array column, though, it of course treats the entire list of names as the unique value instead of each individual name. Is there a way to make each name unique across every row instead?
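
To be concrete, this is the kind of constraint I mean (constraint name assumed); it only rejects two rows with the exact same array, not two rows that share a single name:

alter table hosts add constraint hosts_names_key unique (names);

-- rejected: identical arrays
-- insert into hosts (names) values ('{foobar}'), ('{foobar}');

-- both accepted, even though they share the name 'foobar'
-- insert into hosts (names) values ('{foobar,alias1}');
-- insert into hosts (names) values ('{foobar,alias2}');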

If not, does anyone know of another data-modeling approach that would make more sense?

Thanks!