I'm trying to get postgis's regression tests to pass, and the problem is that it's trying to set UTF-8 encoding and load a string that is 32 characters of 8859-1, but 35 bytes in UTF-8 (which a SQL_ASCII database counts as 35 characters), into a varchar(32). My database default is SQL_ASCII because I don't have a locale set, and thus no conversion happens. So the bug is that postgis's regression tests shouldn't assume the server's default encoding, and should create the test database with a specified encoding.

So with pgsql 9.1.2 on NetBSD/i386 5.1_STABLE (very recent):

gdt 76 ~ > locale -a | egrep en_US
en_US.ISO8859-1
en_US.ISO8859-15
en_US.US-ASCII
en_US.UTF-8

gdt 77 ~ > createdb --locale=en_US.ISO8859-1 foo
createdb: database creation failed: ERROR:  invalid locale name en_US.ISO8859-1

Then, I tried:

gdt 80 ~ > createdb --encoding=UTF-8 foo
createdb: database creation failed: ERROR:  new encoding (UTF8) is incompatible with the encoding of the template database (SQL_ASCII)
HINT:  Use the same encoding as in the template database, or use template0 as template.

gdt 82 ~ > createdb --encoding=ISO8859-1 foo
createdb: database creation failed: ERROR:  new encoding (LATIN1) is incompatible with the encoding of the template database (SQL_ASCII)
HINT:  Use the same encoding as in the template database, or use template0 as template.

So, how do I create a database with a different locale, or at least an encoding that UTF-8 can be converted to when only ISO8859-1 characters are used? Am I missing something basic, or is our pgsql locale support off, or ??

(If I set LC_ALL to en_US.ISO8859-1 when creating the very first database, then all is OK. So maybe we should do that in the scripts if no locale is set.)
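Following the HINT above, one thing I'd expect to work (a sketch, not verified on this NetBSD setup) is to clone template0, which carries no encoding restriction, and pin the locale to C, which is compatible with any server encoding. createdb has had --lc-collate/--lc-ctype since 8.4, so on 9.1.2:

# template0 has no encoding restriction; the C locale accepts any encoding
createdb --template=template0 --encoding=UTF-8 \
    --lc-collate=C --lc-ctype=C foo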
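The same approach would let the postgis regression scripts create their test database with an explicit encoding instead of inheriting the cluster default. A rough SQL sketch of what the setup step could do (the database name postgis_reg is just a placeholder, not necessarily what the scripts actually use):

-- template0 plus the C locale allows an explicit encoding
-- regardless of how template1 / the cluster was initialized
CREATE DATABASE postgis_reg
  TEMPLATE template0
  ENCODING 'UTF8'
  LC_COLLATE 'C'
  LC_CTYPE 'C';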