+Arrays are a first class datatype within postgresql. The contents can only be of a
+single datatype and postgresql will enforce that typing. Arrays can be multidimensional.
+The starting index is 1, not 0. Regardless of whether you specify an array length when
+you create a table, postgresql will always treat arrays as variable length.
+
+
Postmodern/s-sql can be used to insert common lisp arrays into postgresql databases,
pull postgresql database arrays out of databases into a common lisp array,
@@ -318,6 +318,10 @@
+Postmodern dao-classes can also have slots that hold common lisp arrays, with the same conversion support.
+
+
This page will go into more detail on how to use the available operators and functions
in s-sql.
@@ -325,19 +329,16 @@ Summary
-
-
Use cases for arrays in a database
-
+
+
Use cases for arrays in a database
+
-
-
General Usage
-
+
+
General Usage
+
-Arrays are a first class datatype within postgresql. The contents can only be of a single
-datatype and postgresql will enforce that typing. You can either use arrays as a datatype
-stored in the database or there may be reasons why you want to use them as an intermediate
-datatype in a query.
+You can use arrays either as a datatype stored in the database or, where there are reasons to do so, as an intermediate datatype in a query.
@@ -392,9 +393,9 @@ General
one-dimensional array and the schedule is a two dimensional array.
-CREATE TABLE sal_emp (
-    name text,
-    pay_by_quarter integer[],
+CREATE TABLE posts (
+    title TEXT,
+    tags TEXT[]
+);
--- Select all posts with tag 'kitty'
-SELECT * FROM posts WHERE tags @> '{kitty}';
+-- Select all posts with tag 'kitty'
+SELECT * FROM posts WHERE tags @> '{kitty}';
Do not use arrays where you need to maintain integrity for foreign relationships. That is what
@@ -556,24 +557,20 @@ Rules of
-
-
-
Data Type Enforcement
-
+
+
Data Type Enforcement
+
-Compared to jsonb, postgresql arrays allow you to enforce
-the data type. This can be critical in both maintaining
-the integrity of your data as well as optimization
-in your appliction code. This database enforeced type
-safety does not, however, enforce the dimensionality
-of the array.
+Compared to jsonb, postgresql arrays enforce the data type. This can be critical both
+for maintaining the integrity of your data and for optimization in your application code.
+This database enforced type safety does not, however, enforce the dimensionality of
+the array.
-
-
Indices on Arrays
-
+
+
Indices on Arrays
+
It is highly recommended that you use GIN or GIST indexes to search
for items in an array column. You should remember that GIST indices are
@@ -584,9 +581,9 @@ Indices
-
-
S-SQL Array Support
-
+
+
S-SQL Array Support
+
S-sql can feel a little messy with respect to arrays but that
is in large part because (a) sql dealing with arrays is messy and
@@ -605,56 +602,56 @@ S-SQL Ar
-
-
:array (used inside a query calling a subquery, selecting into an array)
-
+
+
:array (used inside a query calling a subquery, selecting into an array)
Just to make these usage examples really simple, we will use the
simplest use case version discussed above, with a tags array in a table
with the name of the item. In this case the name is the name of a
-receipe and the tags are ingredients that either go in the receipe
-or accompany the receipe.
+recipe and the tags are ingredients that either go in the recipe
+or accompany the recipe.
This will automatically insert the required square brackets into the sql statement
being passed to postgresql. This automatic translation between lisp and
@@ -757,14 +753,14 @@ General
Updating the array can be done either explicitly:
-;;; Update array with an lisp array (changing onion to onions in the one row where it is singular
-(query (:update 'receipes
-      :set 'tags #("raw meat" "bulgur" "onions" "spices" "pita bread")
-      :where (:= 'name "Kibbe nayeh")))
+;;; Update array with a lisp array (changing onion to onions in the one row where it is singular)
+(query (:update 'recipes
+      :set 'tags #("raw meat" "bulgur" "onions" "spices" "pita bread")
+      :where (:= 'name "Kibbe nayeh")))
or passing in a lisp variable:
-;;; checking passing a lisp array as a variable
-(let ((lisp-arry #("wine" "garlic" "soy sauce")))
- (query (:update 'receipes
- :set 'tags '$1
- :where (:= 'name 11))
+;;; checking passing a lisp array as a variable
+(let ((lisp-arry #("wine" "garlic" "soy sauce")))
+ (query (:update 'recipes
+ :set 'tags '$1
+ :where (:= 'name 11))
lisp-arry))
-SELECT r.rolname,
-       ARRAY(SELECT b.rolname
-             FROM pg_catalog.pg_auth_members m
-             JOIN pg_catalog.pg_roles b ON (m.roleid = b.oid)
-             WHERE m.member = r.oid) as memberof
-FROM pg_catalog.pg_roles r
-ORDER BY 1;
+SELECT r.rolname,
+       ARRAY(SELECT b.rolname
+             FROM pg_catalog.pg_auth_members m
+             JOIN pg_catalog.pg_roles b ON (m.roleid = b.oid)
+             WHERE m.member = r.oid) as memberof
+FROM pg_catalog.pg_roles r
+ORDER BY 1;
**************************
(sql (:select '*
+ :from 'recipes
+       :where (:= "chicken" (:any* 'tags))))
-"(SELECT * FROM receipes WHERE (E'chicken' = ANY(tags)))"
+"(SELECT * FROM recipes WHERE (E'chicken' = ANY(tags)))"
-(sql (:select '*
- :from 'receipes
- :where (:="chicken" (:any 'tags ))))
+(sql (:select '*
+ :from 'recipes
+       :where (:= "chicken" (:any 'tags))))
-"(SELECT * FROM receipes WHERE (E'chicken' = ANY tags))"
+"(SELECT * FROM recipes WHERE (E'chicken' = ANY tags))"
In the following two cases we want to use ':any*'. In the first simple query,
-we are looking for everything in the rows where the name of the receipe is in
+we are looking for everything in the rows where the name of the recipe is in
the lisp array we passed in.
Per postgresql documentation array comparisons compare the array contents
element-by-element, using the default B-tree comparison function for the
@@ -1176,136 +1177,136 @@ Array Co
Form is (:operator array1 array2)
-
-
:= Equality Comparison (Are two arrays equal on an element by element basis)
-
+
+
:= Equality Comparison (Are two arrays equal on an element by element basis)
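+As an example (a sketch, assuming a live connection; the comparison is made element
+by element, so these two arrays are equal):
+(query (:select (:= (:array[] 1 2 3) (:array[] 1 2 3))) :single)
+This should return t.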
Form: (:array-cat (array1 array2))
Concatenates two arrays. No more, no less. Both arrays need to have the same
data type. They do not need to be the same length.
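+For example (a sketch, assuming a live connection):
+(query (:select (:array-cat (:array[] 1 2) (:array[] 3 4))) :single)
+This should return #(1 2 3 4).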
Form: (:array-position (array element starting-point-if-not-one))
Array-position returns the subscript of the first occurrence of the
@@ -1541,8 +1542,8 @@ array-po
Requires postgresql version 9.5 or newer.
Form: (:array-replace (array element-to-be-replaced element-used-as-replacement))
Array-replace replaces each array element equal to the given value
with a new value. Requires postgresql 9.3 or newer.
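+For example, replacing every 2 with a 9 (a sketch, assuming a live connection):
+(query (:select (:array-replace (:array[] 1 2 3 2) 2 9)) :single)
+This should return #(1 9 3 9).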
Form: (:string-to-array (text delimiter optional-null-string))
String-to-array splits a string into array elements using
the supplied delimiter and optional null string.
An empty array will be returned by postmodern as nil.
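+For example, splitting a comma separated string (a sketch, assuming a live connection):
+(query (:select (:string-to-array "a,b,c" ",")) :single)
+This should return #("a" "b" "c").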
diff --git a/doc/array-notes.org b/doc/array-notes.org
index 3208b81..ca21d25 100644
--- a/doc/array-notes.org
+++ b/doc/array-notes.org
@@ -5,32 +5,38 @@
#+OPTIONS: ^:nil
[[file:s-sql.html][Return to s-sql.html]]
+[[file:dao-classes.html][Return to dao-classes.html]]
+[[file:postmodern.html][Return to postmodern.html]]
+
* Summary
:PROPERTIES:
- :ID: 25ed4f42-9f8d-43e0-93df-593e8af1d200
+ :ID: array-summary
:END:
+Arrays are a first class datatype within postgresql. The contents can only be of a
+single datatype and postgresql will enforce that typing. Arrays can be multidimensional.
+The starting index is 1, not 0. Regardless of whether you specify an array length when
+you create a table, postgresql will always treat arrays as variable length.
+
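+To illustrate the 1-based indexing, a minimal sketch using the :[] operator discussed
+later on this page (assumes a live connection):
+#+BEGIN_SRC lisp
+;; index 1 fetches the first element, not the second
+(query (:select (:[] (:array[] "a" "b" "c") 1)) :single)
+;; should return "a"
+#+END_SRC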
Postmodern/s-sql can be used to insert common lisp arrays into postgresql databases,
pull postgresql database arrays out of databases into a common lisp array,
and generally engage in all the ways that sql can use postgresql arrays.
Postgresql arrays are documented at https://www.postgresql.org/docs/current/static/arrays.html
and https://www.postgresql.org/docs/current/static/functions-array.html.
+Postmodern dao-classes can also have slots that hold common lisp arrays, with the same conversion support.
+
This page will go into more detail on how to use the available operators and functions
in s-sql.
* Use cases for arrays in a database
:PROPERTIES:
- :ID: 9d616ea8-b581-4d78-b83f-2a1bb550c5a7
+ :ID: array-use-cases
:END:
** General Usage
:PROPERTIES:
- :ID: e5fa6a9b-8773-473a-aa9c-7bdaf3aa203e
+ :ID: general-array-usage
:END:
-Arrays are a first class datatype within postgresql. The contents can only be of a single
-datatype and postgresql will enforce that typing. You can either use arrays as a datatype
-stored in the database or there may be reasons why you want to use them as an intermediate
-datatype in a query.
+You can use arrays either as a datatype stored in the database or, where there are reasons to do so, as an intermediate datatype in a query.
There is a bit of controversy over the use of the array datatype in a database. There are those
who adamantly oppose it, claiming that it is a violation of 1NF form (normalization).
@@ -180,7 +186,7 @@ See also [[https://www.compose.com/articles/take-a-dip-into-postgresql-arrays/]]
** Rules of Thumb - Do Not Use Arrays If:
:PROPERTIES:
- :ID: 1022eae8-7780-44e5-874e-207df82a4bea
+ :ID: do-not-use-arrays-if
:END:
- Do not use arrays where you need to maintain integrity for foreign relationships. That is what
@@ -192,21 +198,17 @@ foreign keys are for.
that the ORM can utilize arrays.
-
** Data Type Enforcement
:PROPERTIES:
- :ID: 58701df2-c03f-4326-9cc9-efa7f2fa4f5b
+ :ID: data-type-enforcement
:END:
-Compared to jsonb, postgresql arrays allow you to enforce
-the data type. This can be critical in both maintaining
-the integrity of your data as well as optimization
-in your appliction code. This database enforeced type
-safety does not, however, enforce the dimensionality
-of the array.
+Compared to jsonb, postgresql arrays enforce the data type. This can be critical both
+for maintaining the integrity of your data and for optimization in your application code.
+This database enforced type safety does not, however, enforce the dimensionality of
+the array.
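+For example, postgresql will reject an attempt to put text into an integer array column
+(a hypothetical sketch; the table name and values are made up):
+#+BEGIN_SRC lisp
+(query (:create-table enforcement-test ((vals :type integer[]))))
+;; the insert below should fail with an invalid input syntax error from postgresql
+(query (:insert-into 'enforcement-test :set 'vals #("not-a-number")))
+#+END_SRC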
** Indices on Arrays
:PROPERTIES:
- :ID: b8b2a4c0-60c7-429b-9bcd-f98763438b7d
+ :ID: array-indices
:END:
It is highly recommended that you use GIN or GIST indexes to search
for items in an array column. You should remember that GIST indices are
@@ -215,7 +217,7 @@ lossy while GIN indices are lossless.
* S-SQL Array Support
:PROPERTIES:
- :ID: aa6cb2e2-fc06-4948-92e7-3f378aca5ee3
+ :ID: s-sql-array-support
:END:
S-sql can feel a little messy with respect to arrays but that
is in large part because (a) sql dealing with arrays is messy and
@@ -292,28 +294,28 @@ Example:
#+END_SRC
** General Usage Examples
:PROPERTIES:
- :ID: 1b396eff-19d5-4653-be90-8c6406dffa60
+ :ID: s-sql-array-general-usage-examples
:END:
Just to make these usage examples really simple, we will use the
simplest use case version discussed above, with a tags array in a table
with the name of the item. In this case the name is the name of a
-receipe and the tags are ingredients that either go in the receipe
-or accompany the receipe.
+recipe and the tags are ingredients that either go in the recipe
+or accompany the recipe.
First to create the table and the indexes. The index on 'name is the
default B-tree index. The index on the tags is a GIN index.
#+BEGIN_SRC sql
-(query (:create-table receipes
+(query (:create-table recipes
((name :type text)
(tags :type text[] :default "{}"))))
-(query (:create-unique-index 'receipe-tags-id-name
- :on "receipes"
+(query (:create-unique-index 'recipe-tags-id-name
+ :on "recipes"
:fields 'name))
-(query (:create-index 'receipe-tags-id-tags
- :on "receipes"
+(query (:create-index 'recipe-tags-id-tags
+ :on "recipes"
:using gin
:fields 'tags))
#+END_SRC
@@ -322,7 +324,7 @@ passing in lisp arrays and it is automatically inserted in the table
as a postgresql array.
#+BEGIN_SRC lisp
(query (:insert-rows-into
- 'receipes
+ 'recipes
:columns 'name 'tags
:values
'(("Fattoush" #("greens" "pita bread" "olive oil" "garlic" "lemon" "salt" "spices"))
@@ -338,7 +340,6 @@ as a postgresql array.
("Baklava" #("filo dough" "honey" "nuts")))))
#+END_SRC
-
This will automatically insert the required square brackets into the sql statement
being passed to postgresql. This automatic translation between lisp and
postgresql arrays does not work where you need a postgresql function in a query.
@@ -356,8 +357,8 @@ S-sql version
First we can start by checking for records that have a specific tag
#+BEGIN_SRC lisp
-(query (:select 'receipe-id 'tags
- :from 'receipe-tags-array
+(query (:select 'recipe-id 'tags
+ :from 'recipe-tags-array
:where (:@> 'tags
(:array[] "bulgur"))))
(("Tabbouleh" #("bulgur" "tomatoes" "onions" "parsley"))
@@ -371,8 +372,8 @@ a lisp array.
Extending this to checking for items with two specific tags:
#+BEGIN_SRC lisp
-(query (:select 'receipe-id 'tags
- :from 'receipe-tags-array
+(query (:select 'recipe-id 'tags
+ :from 'recipe-tags-array
:where (:@> 'tags
(:array[] "bulgur" "parsley"))))
@@ -384,7 +385,7 @@ acts as an 'or' logical test:
#+BEGIN_SRC lisp
(let ((tst-arry #("parsley" "cheese")))
(query (:order-by (:select '*
- :from 'receipes
+ :from 'recipes
:where (:&& 'tags tst-arry))
'name)))
'(("Manakeesh"
@@ -400,7 +401,7 @@ acts as an 'or' logical test:
Validating that this is returning a vector:
#+BEGIN_SRC lisp
(type-of (query (:select 'tags
- :from 'receipes
+ :from 'recipes
:where (:= 'name "Manakeesh"))
:single))
@@ -409,7 +410,7 @@ Validating that this is returning a vector:
We can also check the length of the array or cardinality:
#+BEGIN_SRC lisp
(query (:select (:cardinality 'tags)
- :from 'receipes
+ :from 'recipes
:where (:= 'name "Manakeesh"))
:single)
#+END_SRC
@@ -417,7 +418,7 @@ We can also check the length of the array or cardinality:
Updating the array can be done either explicitly:
#+BEGIN_SRC lisp
;;; Update array with a lisp array (changing onion to onions in the one row where it is singular)
-(query (:update 'receipes
+(query (:update 'recipes
:set 'tags #("raw meat" "bulgur" "onions" "spices" "pita bread")
:where (:= 'name "Kibbe nayeh")))
#+END_SRC
@@ -425,7 +426,7 @@ or passing in a lisp variable:
#+BEGIN_SRC lisp
;;; checking passing a lisp array as a variable
(let ((lisp-arry #("wine" "garlic" "soy sauce")))
- (query (:update 'receipes
+ (query (:update 'recipes
:set 'tags '$1
:where (:= 'name 11))
lisp-arry))
@@ -440,13 +441,13 @@ starting point).
#+BEGIN_SRC lisp
(query (:select (:[] 'tags 2)
- :from 'receipes
+ :from 'recipes
:where (:= 'name 3)))
'(("olive oil"))
(query (:select (:[] 'tags 2 3)
- :from 'receipes
+ :from 'recipes
:where (:= 'name 3)))
'((#("olive oil" "eggplant")))
@@ -491,7 +492,7 @@ the distinct tags in a list of lists where every list has a single tag entry.
#+BEGIN_SRC lisp
(query (:order-by
(:select (:as (:unnest 'tags) 'tag) :distinct
- :from 'receipes)
+ :from 'recipes)
'tag))
'(("bulgur") ("cheese") ("chicken") ("chickpeas") ("cucumbers") ("eggplant")
@@ -508,7 +509,7 @@ We can use with and group-by operators to count the unique tags:
(:with
(:as 'p
(:select (:as (:unnest 'tags) 'tag)
- :from 'receipes))
+ :from 'recipes))
(:select 'tag (:as (:count 'tag) 'cnt)
:from 'p
:group-by 'tag))
@@ -524,11 +525,11 @@ We can use with and group-by operators to count the unique tags:
Yes, there are array-append, array-replace etc operators
#+BEGIN_SRC lisp
-(query (:update 'receipes
+(query (:update 'recipes
:set 'tags (:array-append 'tags "appended-items")
:where (:= 'name "Kibbe nayeh")))
-(query (:update 'receipes
+(query (:update 'recipes
:set 'tags (:array-replace 'tags "spices" "chocolate")))
#+END_SRC
The above two versions checked all the rows, even those without the target string,
@@ -537,7 +538,7 @@ effectively the equivalent of not using the index.
You can use a different operator that more effectively uses the GIN index and
just touches the rows with the targeted string in the array:
#+BEGIN_SRC lisp
-(query (:update 'receipes
+(query (:update 'recipes
:set 'tags (:array-replace 'tags "chocolate" "spices")
:where (:<@ "{\"chocolate\"}" 'tags)))
#+END_SRC
@@ -551,27 +552,27 @@ To show the difference, look at the sql statements that are generated by the two
operators :any* and :any
#+BEGIN_SRC lisp
(sql (:select '*
- :from 'receipes
+ :from 'recipes
:where (:= "chicken" (:any* 'tags ))))
-"(SELECT * FROM receipes WHERE (E'chicken' = ANY(tags)))"
+"(SELECT * FROM recipes WHERE (E'chicken' = ANY(tags)))"
(sql (:select '*
- :from 'receipes
+ :from 'recipes
:where (:= "chicken" (:any 'tags ))))
-"(SELECT * FROM receipes WHERE (E'chicken' = ANY tags))"
+"(SELECT * FROM recipes WHERE (E'chicken' = ANY tags))"
#+END_SRC
In the following two cases we want to use ':any*'. In the first simple query,
-we are looking for everything in the rows where the name of the receipe is in
+we are looking for everything in the rows where the name of the recipe is in
the lisp array we passed in.
In the second query we look for all the rows where the string "chicken"
appears in any of the tag arrays.
#+BEGIN_SRC lisp
(query (:select '*
- :from 'receipes
+ :from 'recipes
:where (:= 'name (:any* '$1)))
#("Trout" "Shish Taouk" "Hamburger"))
@@ -580,7 +581,7 @@ appears in any of the tag arrays.
"pita bread")))
(query (:select '*
- :from 'receipes
+ :from 'recipes
:where (:= '$1 (:any* 'tags )))
"chicken")
@@ -594,7 +595,7 @@ the or operator which looks like :&&.
#+BEGIN_SRC lisp
(query (:order-by
(:select '*
- :from 'receipes
+ :from 'recipes
:where (:&& 'tags (:array[] '$1 '$2)))
'name)
"parsley" "cheese")
@@ -622,7 +623,7 @@ We are looking for rows from the database table which contain the elements
of an array composed of the two strings passed in as parameters.
#+BEGIN_SRC lisp
(query (:order-by
- (:select '* :from 'receipes
+ (:select '* :from 'recipes
:where (:<@ (:array[] '$1 '$2)
'tags))
'name)
@@ -633,7 +634,7 @@ of an array composed of the two strings passed in as parameters.
"olives")))
(query (:order-by
- (:select '* :from 'receipes
+ (:select '* :from 'recipes
:where (:@> 'tags
(:array[] '$1 '$2)))
'name)
@@ -649,7 +650,7 @@ the small two element array we are passing in. The answer is nil.
#+BEGIN_SRC lisp
(query (:order-by
- (:select '* :from 'receipes
+ (:select '* :from 'recipes
:where (:@> (:array[] '$1 '$2)
'tags))
'name)
@@ -658,7 +659,7 @@ the small two element array we are passing in. The answer is nil.
nil
(query (:order-by
- (:select '* :from 'receipes
+ (:select '* :from 'recipes
:where (:<@ 'tags
(:array[] '$1 '$2)))
'name)
@@ -666,13 +667,17 @@ nil
nil
#+END_SRC
+* Dao Class Support for Arrays
+ :PROPERTIES:
+ :ID: dao-class-support
+ :END:
* Array Operators
:PROPERTIES:
- :ID: 0b311d6a-57e5-44b3-aa03-321274cd4800
+ :ID: array-operators
:END:
** Array Comparison Operators
:PROPERTIES:
- :ID: cbdab925-dd3b-4531-8844-29f5f5e61b01
+ :ID: array-comparison-operators
:END:
Per postgresql [[https://www.postgresql.org/docs/current/static/functions-array.html][documentation]] array comparisons compare the array contents
@@ -1121,7 +1126,7 @@ Like all the aggregate functions, you can pass :filter, :distinct or :order-by
(in that order) as additional parameters.
#+BEGIN_SRC lisp
-(query (:select (:array-agg 'name) :from 'receipes) :single)
+(query (:select (:array-agg 'name) :from 'recipes) :single)
#("Fattoush" "Shawarma" "Baba Ghanoush" "Shish Taouk" "Kibbe nayeh" "Manakeesh"
"Fakafek" "Tabbouleh" "Kofta" "Kunafeh" "Baklava")
diff --git a/doc/dao-classes.html b/doc/dao-classes.html
index c587ad6..1202515 100644
--- a/doc/dao-classes.html
+++ b/doc/dao-classes.html
@@ -1,7 +1,7 @@
-
+
DAO Classes
@@ -227,7 +227,7 @@
-The name and score slots cannot be null because :col-type does not provide for db-null
-as an optiona. The creditcard slot can be null and actually defaults to null.
+In our example, the name and score slots cannot be null because :col-type does not provide for db-null as an option. The creditcard slot can be null and actually defaults to null.
The :col-default :null specification ensures that the default in the database for
this field is null, but it does not bind the slot to a default form. Thus, making
an instance of the class without initializing this slot will leave it in an unbound
state.
+
+The payment-history slot is matched to a Postgresql column named payment_history
+(remember that Postgresql uses underscores rather than hyphens) and that Postgresql
+column is an array of integers. If we wanted a two dimensional array of integers,
+the col-type would look like:
+
+
+
:col-type (or (array (array integer)) db-null)
+
+
+
+If the value contained in the payment-history slot is a common lisp array,
+Postmodern will seamlessly handle the conversion between the common lisp array and
+the Postgresql array.
+
+
An example of a class where the keys are set as multiple column keys is here:
@@ -429,7 +446,15 @@
Basic Dao Definition Examples
We also specified that the table name is not "country" but "countries".
(Some style guides recommend that table names be plural and references to rows
-be singular.)
+be singular.) NOTE: You can provide a fully qualified table name. In other words,
+if you have
+
+
+
(:table-name a.countries)
+
+
+
+Postmodern will look for the countries table in the "a" schema (unquoted identifiers are folded to lower case).
@@ -593,7 +618,7 @@
Slot Options
(defclass test-data-col-identity-with-references ()
-  ((id :col-type integer :col-identity t :accessor id)
+  ((id :col-type integer :col-identity t :accessor id :col-primary-key t)
   (username :col-type text :unique t :initarg :username :accessor username)
   (department-id :col-type integer :col-references ((departments id))
                  :initarg :department-id :accessor department-id))
@@ -663,47 +688,71 @@ Dao Export and Import Functions (Postmo
-Consider the following dao-class definition. We have added additional column keyword parameters :col-export and :col-import. These parameters refer to functions which will convert the values from that slot to a valid Postgresql type (in our example, a string) on export to the database and from that Postgresql type to the type we want in this slot on import from the database..
+Consider the following dao-class definition. We have added additional column keyword parameters :col-export and :col-import. These parameters refer to functions which will convert the values from that slot to a valid Postgresql type (in our example, a string) on export to the database and from that Postgresql type to the type we want in this slot on import from the database.
+
+
+
+To make things slightly more interesting, we have two slots which are going to contain
+lists, but one will export to a Postgresql column that contains strings and the other
+will export to a Postgresql column that contains arrays of integers.
-Now we need to define the import functions. When writing your import functions, pay attention to how you want to handle nil or :NULL values as well as how you might want to error check the conversion from a Postgresql datatype to a CL datatype.
+Now we are going to define the import functions. When writing your import functions, pay attention to how you want to handle nil or :NULL values as well as how you might want to error check the conversion from a Postgresql datatype to a CL datatype. Just to show some of the
+differences, we are going to translate :NULL strings in Postgresql to :NULL in common lisp
+and we are going to translate :NULL arrays in Postgresql to nil in common lisp.
-(defun string-to-list (str)
+(defun string->list (str)
"Take a string representation of a list and return a lisp list.
-Note that you need to handle :NULLs."
+ Note that you need to handle :NULLs."
(cond ((eq str :NULL)
:NULL)
(str
- (with-input-from-string (s str)
- (read s)))
+ (with-input-from-string (s str) (read s)))
+ (t nil)))
+
+(defun array->list (arry)
+  "Here we have decided that we want the list to be nil rather than :NULL if the array is empty."
+ (cond ((eq arry :NULL)
+ nil)
+ ((vectorp arry)
+ (coerce arry 'list))
(t nil)))
-And now we need to define the export function. In our example we are just going to be using format to write the CL value to a string. You are responsible for writing an export function that does what you need. This example just tells Postgresql to insert a string "unknown" if the slot value is not a list. In real life you would need more error checking and condition handling.
+And now the export functions. In our example we are just going to use format to write the CL value to a string when the value is a list. You are responsible for writing an export function that does what you need. This example just tells Postgresql to insert :NULL if the slot value is not a list. In real life you would need more error checking and condition handling.
+
+
+
+The list to array export function inserts :NULL if the list is empty (nil) and otherwise
+coerces the list to a vector so that Postgresql will accept it as an array.
-(defun list-to-string (val)
- "Simply uses (format ..) to write a list out as a string"
- (if (listp val)
- (format nil "~a" val)
- "unknown"))
+(defun list->string (lst)
+  "Here we have decided to insert :null if the input is not a list."
+  (if (listp lst)
+      (format nil "~a" lst)
+      :null))
+
+(defun list->arr (lst)
+ (if (null lst)
+ :null
+ (coerce lst 'vector)))
diff --git a/doc/dao-classes.org b/doc/dao-classes.org
index 8ce4171..1c3d3a2 100644
--- a/doc/dao-classes.org
+++ b/doc/dao-classes.org
@@ -28,12 +28,14 @@ slots in these classes will refer to columns in the database.
** Basic Dao Definition Examples
A simple dao definition could look like this:
#+BEGIN_SRC lisp
- (defclass users ()
- ((name :col-type string :initarg :name :accessor name)
- (creditcard :col-type (or db-null integer) :initarg :card :col-default :null)
- (score :col-type bigint :col-default 0 :accessor score))
- (:metaclass dao-class)
- (:keys name))
+ (defclass users ()
+ ((name :col-type string :initarg :name :accessor name)
+ (creditcard :col-type (or db-null integer) :initarg :card :col-default :null)
+ (score :col-type bigint :col-default 0 :accessor score)
+ (payment-history :col-type (or (array integer) db-null)
+ :initarg :payment-history :accessor payment-history))
+ (:metaclass dao-class)
+ (:keys name))
#+END_SRC
In this case the name of the users will be treated as the primary key (the :keys
parameter at the end) and the database table is assumed to be named users because
@@ -42,13 +44,23 @@ that is the name of the class and there was no :table-name parameter provided.
reserved words, while possible using quotes, is generally not worth the additional
trouble they cause.)
-The name and score slots cannot be null because :col-type does not provide for db-null
-as an optiona. The creditcard slot can be null and actually defaults to null.
+In our example, the name and score slots cannot be null because :col-type does not provide for db-null as an option. The creditcard slot can be null and actually defaults to null.
The :col-default :null specification ensures that the default in the database for
this field is null, but it does not bind the slot to a default form. Thus, making
an instance of the class without initializing this slot will leave it in an unbound
state.
+The payment-history slot is matched to a Postgresql column named =payment_history=
+(remember that Postgresql uses underscores rather than hyphens) and that Postgresql
+column is an array of integers. If we wanted a two dimensional array of integers,
+the col-type would look like:
+#+begin_src lisp
+:col-type (or (array (array integer)) db-null)
+#+end_src
+If the value contained in the payment-history slot is a common lisp array,
+Postmodern will seamlessly handle the conversion between the common lisp array and
+the Postgresql array.
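+A sketch of round-tripping the array slot (assumes a live connection and that the users
+table has been created from this class definition):
+#+begin_src lisp
+(let ((user (make-instance 'users :name "alice" :payment-history #(100 250 75))))
+  (insert-dao user)
+  (payment-history (get-dao 'users "alice")))
+;; should return the common lisp vector #(100 250 75)
+#+end_src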
+
An example of a class where the keys are set as multiple column keys is here:
#+BEGIN_SRC lisp
(defclass points ()
@@ -106,7 +118,12 @@ Now you can see why the double parens.
We also specified that the table name is not "country" but "countries".
(Some style guides recommend that table names be plural and references to rows
-be singular.)
+be singular.) NOTE: You can provide a fully qualified table name. In other words,
+if you have
+#+begin_src lisp
+(:table-name a.countries)
+#+end_src
+Postmodern will look for the countries table in the "a" schema (unquoted identifiers are folded to lower case).
When inheriting from DAO classes, a subclass' set of columns also contains all
the columns of its superclasses. The primary key for such a class is the union
@@ -241,7 +258,7 @@ The slot definitions in a table have several additional optional keyword paramet
the departments table:
#+begin_src lisp
(defclass test-data-col-identity-with-references ()
- ((id :col-type integer :col-identity t :accessor id)
+ ((id :col-type integer :col-identity t :accessor id :col-primary-key t)
(username :col-type text :unique t :initarg :username :accessor username)
(department-id :col-type integer :col-references ((departments id))
:initarg :department-id :accessor department-id))
@@ -292,40 +309,60 @@ There may be times when the types of values in a dao slot do not have comparable
One method would be to use text columns or jsonb columns in Postgresql and have functions that convert as necessary going back and forth. In the following example we will use text columns in Postgresql and write CL list data to string when we "export" the data to Postgresql and then convert from string when we "import" the data from Postgresql into a dao-class instance.
-Consider the following dao-class definition. We have added additional column keyword parameters :col-export and :col-import. These parameters refer to functions which will convert the values from that slot to a valid Postgresql type (in our example, a string) on export to the database and from that Postgresql type to the type we want in this slot on import from the database..
+Consider the following dao-class definition. We have added additional column keyword parameters :col-export and :col-import. These parameters refer to functions which will convert the values from that slot to a valid Postgresql type (in our example, a string) on export to the database and from that Postgresql type to the type we want in this slot on import from the database.
+
+To make things slightly more interesting, we have two slots which are going to contain
+lists, but one will export to a Postgresql column that contains strings and the other
+will export to a Postgresql column that contains arrays of integers.
#+begin_src lisp
-(defclass listy ()
- ((id :col-type integer :col-identity t :accessor id)
- (name :col-type text :col-unique t :col-check (:<> 'name "")
- :initarg :name :accessor name)
- (rlist :type list :col-type (or text db-null) :initarg :rlist :accessor rlist
- :col-export list-to-string :col-import string-to-list)
- (alist :type alist :col-type (or text db-null) :initarg :alist :accessor alist
- :col-export list-to-string :col-import string-to-alist)
- (plist :type plist :col-type (or text db-null) :initarg :plist :accessor plist
- :col-export list-to-string :col-import string-to-plist))
- (:metaclass dao-class)
- (:table-name listy))
+ (defclass listy ()
+ ((id :col-type integer :col-identity t :accessor id :col-primary-key t)
+ (name :col-type text :col-unique t :col-check (:<> 'name "")
+ :initarg :name :accessor name)
+ (r-list :col-type (or text db-null) :initarg :r-list :accessor r-list
+ :col-export list->string :col-import string->list)
+ (l-array :col-type (or (array integer) db-null)
+ :initarg :l-array :accessor l-array
+ :col-export list->arr :col-import array->list))
+ (:metaclass dao-class)
+ (:table-name listy))
#+end_src
-Now we need to define the import functions. When writing your import functions, pay attention to how you want to handle nil or :NULL values as well as how you might want to error check the conversion from a Postgresql datatype to a CL datatype.
+Now we are going to define the import functions. When writing your import functions, pay attention to how you want to handle nil or :NULL values as well as how you might want to error check the conversion from a Postgresql datatype to a CL datatype. Just to show some of the
+differences, we are going to translate :NULL strings in Postgresql to :NULL in common lisp
+and we are going to translate :NULL arrays in Postgresql to nil in common lisp.
#+begin_src lisp
- (defun string-to-list (str)
+ (defun string->list (str)
"Take a string representation of a list and return a lisp list.
- Note that you need to handle :NULLs."
+ Note that you need to handle :NULLs."
(cond ((eq str :NULL)
:NULL)
(str
- (with-input-from-string (s str)
- (read s)))
+ (with-input-from-string (s str) (read s)))
+ (t nil)))
+
+ (defun array->list (arry)
+ "Here we have decided that we want the list be be nil rather than :NULL if the array is empty."
+ (cond ((eq arry :NULL)
+ nil)
+ ((vectorp arry)
+ (coerce arry 'list))
(t nil)))
#+end_src
-And now we need to define the export function. In our example we are just going to be using format to write the CL value to a string. You are responsible for writing an export function that does what you need. This example just tells Postgresql to insert a string "unknown" if the slot value is not a list. In real life you would need more error checking and condition handling.
+And now the export functions. In our example we simply use format to write the CL value to a string when it is a list. You are responsible for writing an export function that does what you need. This example just tells Postgresql to insert :NULL if the slot value is not a list. In real life you would need more error checking and condition handling.
+
+The list to array export function inserts :NULL if the value is nil and otherwise coerces the
+list to a vector so that Postgresql will accept it as an array.
#+begin_src lisp
- (defun list-to-string (val)
- "Simply uses (format ..) to write a list out as a string"
- (if (listp val)
- (format nil "~a" val)
- "unknown"))
+ (defun list->string (lst)
+ "Write the list out as a string. Here we have decided to insert :null if the value is nil or not a list."
+ (if (and lst (listp lst))
+ (format nil "~a" lst)
+ :null))
+
+ (defun list->arr (lst)
+ (if (null lst)
+ :null
+ (coerce lst 'vector)))
#+end_src
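+A quick sanity check of the conversion functions at the REPL (expected results shown in comments, assuming the definitions above):
+#+begin_src lisp
+ (list->string '(1 2 3))  ; => "(1 2 3)"
+ (string->list "(1 2 3)") ; => (1 2 3)
+ (list->arr '(7 8 9))     ; => #(7 8 9)
+ (array->list #(7 8 9))   ; => (7 8 9)
+#+end_src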
** method dao-keys (class)
diff --git a/doc/json-from-postgres.html b/doc/json-from-postgres.html
new file mode 100644
index 0000000..a47ca3e
--- /dev/null
+++ b/doc/json-from-postgres.html
@@ -0,0 +1,777 @@
+
+Json From Postgresql/Postmodern
+
Intro
+
+
+Suppose the front end of an app needs data as a json string and you need to get the data out of a database and convert it to that format. There are several ways to do that. We will look at doing it with basic sql, s-sql and a dao class. For purposes of this note, we are not looking at jsonb type columns in Postgresql.
+
+
+
+To make things a little more interesting, we are going to have a private column which we do not want to pass to the front-end, a Postgresql point datatype column and a geometry type column (using postgis) to compare to the point type. If you do not have postgis installed, you can find installation instructions here: https://postgis.net/install/ or just read the postgis stuff without trying to run the code.
+
+
+
+I am going to use the local-time library to deal with dates, so we need to do a little housework on that side as well.
+
+One difference to note is that the data for the lat_long point data type has a comma, but the geom geometry data type does not. I do not know why the syntax differs.
+
+
+
+I want to flag something that can surprise people. The lat_long column is a Postgresql point datatype. That means it is an array. As you may recall, Postgresql arrays start at 1, not 0. Except here. If you wanted just the latitude for the row with the employee_id of 1, you would actually call for array 0.
+
+
+
(pomo:query "select lat_long[0] from employees where employee_id=1":single)
+59.334591d0
+
+
+
+If you wanted to get the latitude and longitude in a list, it would look like:
+
+
+
(pomo:query "select lat_long[0], lat_long[1] from employees where employee_id=1")
+((59.334591d0 18.06324d0))
+
+
+
+
+If you ran a basic query to see how Postgresql was storing that geometry type, it would look something like this:
+
+
+
+
(pomo:query "select geom from employees where employee_id=1":single)
+"0101000020E61000009A44BDE0D3AA4D408ECC237F30103240"
+
+
+
+To actually get the separate latitude and longitude from the geom column, you need to use Postgresql functions st_x and st_y like so:
+
+
+
(query "select st_x(geom), st_y(geom) from employees where employee_id=1")
+((59.334591d0 18.06324d0))
+
+
+
+
+Now on to getting this information as json. Postgresql gives you a json generator function that takes a tuple and returns a json dictionary. So, for example:
+
+You can see that it would automatically break out the geom data. However, as written, it has the fatal flaw of also collecting the private info. That can get solved with a slight modification:
+
+You can also aggregate rows using the Postgresql json_agg function.
+
+
+
(query "select json_agg(e)
+ from
+ (select employee_id, department_id, name, start_date, contact, lat_long, geom
+ from employees)
+ e")
+
+
+
+You could skip the Postgresql json function and ask Postmodern to return the query as a json object expressed as a string. One thing to note is that Postmodern will return the labels as camelCase rather than Postgresql returning them as underscores:
+
+
+
(query "select employee_id, department_id, name, start_date, contact, lat_long, geom
+ from employees
+ where employee_id=1"
+ :json-str)
+"{\"employeeId\":1,\"departmentId\":1,\"name\":\"Maja\",\"startDate\":\"{2018-09-01T20:00:00.000000-04:00}\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"latLong\":[59.334591,18.06324],\"geom\":\"0101000020E61000009A44BDE0D3AA4D408ECC237F30103240\"}"
+
+
+
+You would need to do a little more work in order to get the desired latitude and longitude out of the geom value.
+
+
+
(query "select employee_id, department_id, name, start_date, contact, lat_long, st_x(geom) as lat, st_y(geom) as long
+ from employees where employee_id=1" :json-str)
+
+"{\"employeeId\":1,\"departmentId\":1,\"name\":\"Maja\",\"startDate\":\"{2018-09-01T20:00:00.000000-04:00}\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"latLong\":[59.334591,18.06324],\"lat\":59.334591,\"long\":18.06324}"
+
+
+
+Both the Postgresql function and the Postmodern return type approach can be applied to the end result of more complicated queries with joins, CTEs and other tools of the trade. Which is actually why we have the department table in this example. Instead of having the department-id in the json we are sending to the front end, let's have the department name.
+
+
+
+First, using the Postgresql row-to-json function:
+
+
+
(query "select row_to_json(e)
+ from (select employee_id, departments.name as department_name, employees.name as employee_name,
+ start_date, contact, lat_long, geom
+ from employees
+ left join departments
+ on departments.department_id = employees.department_id
+ where employee_id=1) e")
+(("{\"employee_id\":1,\"department_name\":\"spatial\",\"employee_name\":\"Maja\",\"start_date\":\"2018-09-02\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"lat_long\":\"(59.334591,18.06324)\",\"geom\":{\"type\":\"Point\",\"coordinates\":[59.334591,18.06324]}}"))
+
+
+
+Now the sql using the Postmodern :json-str keyword parameter for query:
+
+
+
(query "select employee_id, departments.name as department_name, employees.name as employee_name,
+ start_date, contact, lat_long, geom
+ from employees
+ left join departments
+ on departments.department_id = employees.department_id
+ where employee_id=1"
+ :json-str)
+"{\"employeeId\":1,\"departmentName\":\"spatial\",\"employeeName\":\"Maja\",\"startDate\":\"{2018-09-01T20:00:00.000000-04:00}\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"latLong\":[59.334591,18.06324],\"geom\":\"0101000020E61000009A44BDE0D3AA4D408ECC237F30103240\"}"
+
+
+
+
+
+
+
The Basic S-SQL Version
+
+
+Assuming you already have a database to use, let's create a couple of tables and insert some data.
+
+One difference to note is that the data for the lat_long point data type has a comma, but the geom geometry data type does not. I do not know why the syntax differs in Postgresql (Postmodern needs to match Postgresql's syntax here).
+
+
+
+I want to flag something that can surprise people. The lat_long column is a Postgresql point datatype. That means it is an array. As you may recall, Postgresql arrays start at 1, not 0. Except here. If you wanted just the latitude for the row with the employee_id of 1, you would actually call for array 0.
+
+Now on to getting this information as json. Postgresql gives you a json generator function that takes a tuple and returns a json dictionary. So, for example:
+
+You can see that it would automatically break out the geom data. However, as written, it has the fatal flaw of also collecting the private info. That can get solved with a slight modification:
+
+You could skip the Postgresql json function and ask Postmodern to return the query as a json object expressed as a string. One thing to note is that Postmodern will return the labels as camelCase rather than Postgresql returning them as underscores:
+
+Both the Postgresql function and the Postmodern return type approach can be applied to the end result of more complicated queries with joins, CTEs and other tools of the trade. Which is actually why we have the department table in this example. Instead of having the department-id in the json we are sending to the front end, let's have the department name.
+
+
+
+First, the s-sql version using the Postgresql row-to-json function:
+
The Basic Dao-class Version
+
+Assuming you already have a database to use, let's create a couple of dao classes, their associated tables and insert some data. Assume we decide we want to keep the geom as a list of latitude and longitude in the geom slot. That means we need import and export functions.
+
+One difference to note is that the data for the lat_long point data type has a comma, but the geom geometry data type does not. I do not know why the syntax differs.
+
+
+
+Now the problem. If you ran a basic query to see how Postgresql was storing that geometry type, it would look something like this:
+
+
+
+
(pomo:query "select geom from employees where employee_id=1":single)
+"0101000020E61000009A44BDE0D3AA4D408ECC237F30103240"
+
+
+
+
+We need import and export functions for the geom slot that implement the OpenGIS specification. See https://www.ogc.org/standards/sfs. Fortunately J.P. Larocque created the cl-wkb package (accessed via quicklisp by quickloading the
+cl-ewkb system) and we can create an import function by combining ironclad's hex-string-to-byte-array and cl-wkb's decode function. So let's do that.
+
+
+
(defungeom->wkb-point (input)
+ "Takes a hexstring that represents a geometry point from postgresql and returns a cl-wkb:point class instance"
+ (cl-wkb:decode (ironclad:hex-string-to-byte-array input)))
+
+
+
+Now we can check whether we succeeded by seeing whether the x point is the latitude we expected:
+
+Looking at the result, we have two issues. First, the start date seems to have lost its senses. Second, it is collecting and passing on the private data to the front end, which we explicitly did not want to do.
+
+That works, so it is something on the cl-json side that we will have to work around. Let's turn to the private data issue.
+
+
+
+One solution would be to create a dao-class that is only a subset of the employees table (minus the private data) and set pomo:*ignore-unknown-columns* to t. (If we did not set pomo:*ignore-unknown-columns*, we would generate an error complaining that the dao
+was not in sync with the table.) Let's do that:
+
+If you are using a different CL json library, you would have to write your own functions to convert from a dao-class object to something that, e.g. jonathan or jsown could use.
+
+
+
diff --git a/doc/json-from-postgres.org b/doc/json-from-postgres.org
new file mode 100644
index 0000000..36964b4
--- /dev/null
+++ b/doc/json-from-postgres.org
@@ -0,0 +1,420 @@
+#+TITLE: Json From Postgresql/Postmodern
+#+OPTIONS: num:nil
+#+HTML_HEAD:
+#+HTML_HEAD:
+#+OPTIONS: ^:nil
+#+OPTIONS: toc:2
+
+* Intro
+
+Suppose the front end of an app needs data as a json string and you need to get the data out of a database and convert it to that format. There are several ways to do that. We will look at doing it with basic sql, s-sql and a dao class. For purposes of this note, we are not looking at jsonb type columns in Postgresql.
+
+To make things a little more interesting, we are going to have a private column which we do not want to pass to the front-end, a Postgresql point datatype column and a geometry type column (using postgis) to compare to the point type. If you do not have postgis installed, you can find installation instructions here: [[https://postgis.net/install/]] or just read the postgis stuff without trying to run the code.
+
+I am going to use the local-time library to deal with dates, so we need to do a little housework on that side as well.
+#+begin_src lisp
+(ql:quickload '(local-time cl-postgres+local-time))
+(local-time:set-local-time-cl-postgres-readers)
+#+end_src
+
+* The Basic SQL Version
+:PROPERTIES:
+:CUSTOM_ID: sql-version
+:END:
+Assuming you already have a database to use, let's create a couple of tables and insert some data.
+#+begin_src lisp
+ (pomo:query "CREATE TABLE departments (
+ department_id bigint primary key,
+ name text
+ )")
+
+ (pomo:query "CREATE TABLE employees (
+ employee_id serial primary key,
+ department_id integer references departments(department_id),
+ name text,
+ start_date date,
+ contact text[],
+ private text,
+ lat_long point,
+ geom geometry(point, 4326)
+ );")
+
+ (pomo:query "INSERT INTO departments
+ (department_id, name)
+ VALUES
+ (1, 'spatial'),
+ (2, 'cloud')")
+
+ (pomo:query "INSERT INTO employees
+ (department_id, name, start_date, contact, private, lat_long, geom)
+ VALUES
+ (1, 'Maja', '2018/09/02', '{"084-767-734","071-334-8473"}', 'not allowed',
+ '(59.334591, 18.063240)', 'POINT(59.334591 18.063240)'),
+ (1, 'Liam', '2019/09/02', '{"084-767-734","071-334-8472"}','private',
+ '(57.708870, 11.974560)','POINT(57.708870 11.974560)'),
+ (2, 'Matteo', '2019/11/01', '{"084-767-734","071-334-8476"}', 'burn before reading',
+ '(58.283489,12.285821)','POINT(58.283489 12.285821)'),
+ (2, 'Astrid', '2020/10/01', '{"084-767-734","071-334-8465"}', 'abandon all hope',
+ '(57.751442, 16.628838)', 'POINT(57.751442 16.628838)');")
+#+end_src
+One difference to note is that the data for the lat_long point data type has a comma, but the geom geometry data type does not. I do not know why the syntax differs.
+
+I want to flag something that can surprise people. The lat_long column is a Postgresql point datatype. That means it is an array. As you may recall, Postgresql arrays start at 1, not 0. Except here. If you wanted just the latitude for the row with the employee_id of 1, you would actually call for array 0.
+#+begin_src lisp
+ (pomo:query "select lat_long[0] from employees where employee_id=1" :single)
+ 59.334591d0
+#+end_src
+If you wanted to get the latitude and longitude in a list, it would look like:
+#+begin_src lisp
+(pomo:query "select lat_long[0], lat_long[1] from employees where employee_id=1")
+((59.334591d0 18.06324d0))
+#+end_src
+
+If you ran a basic query to see how Postgresql was storing that geometry type, it would look something like this:
+
+#+begin_src lisp
+ (pomo:query "select geom from employees where employee_id=1" :single)
+"0101000020E61000009A44BDE0D3AA4D408ECC237F30103240"
+#+end_src
+To actually get the separate latitude and longitude from the geom column, you need to use Postgresql functions st_x and st_y like so:
+#+begin_src lisp
+(query "select st_x(geom), st_y(geom) from employees where employee_id=1")
+((59.334591d0 18.06324d0))
+#+end_src
+
+Now on to getting this information as json. Postgresql gives you a json generator function that takes a tuple and returns a json dictionary. So, for example:
+#+begin_src lisp
+(query "select row_to_json(employees) from employees where employee_id=1")
+(("{\"employee_id\":1,
+ \"department_id\":1,
+ \"name\":\"Maja\",
+ \"start_date\":\"2018-09-02\",
+ \"contact\":[\"084-767-734\",\"071-334-8473\"],
+ \"private\":\"not allowed\",
+ \"lat_long\":\"(59.334591,18.06324)\",
+ \"geom\":{\"type\":\"Point\",\"coordinates\":[59.334591,18.06324]}}"))
+#+end_src
+You can see that it would automatically break out the geom data. However, as written, it has the fatal flaw of also collecting the private info. That can get solved with a slight modification:
+#+begin_src lisp
+(query "select row_to_json(e)
+ from
+ (select employee_id, department_id, name, start_date, contact, lat_long, geom
+ from employees where employee_id=1) e")
+(("{\"employee_id\":1,
+ \"department_id\":1,
+ \"name\":\"Maja\",
+ \"start_date\":\"2018-09-02\",
+ \"contact\":[\"084-767-734\",\"071-334-8473\"],
+ \"lat_long\":\"(59.334591,18.06324)\",
+ \"geom\":{\"type\":\"Point\",\"coordinates\":[59.334591,18.06324]}}"))
+#+end_src
+You can also aggregate rows using the Postgresql json_agg function.
+#+begin_src lisp
+(query "select json_agg(e)
+ from
+ (select employee_id, department_id, name, start_date, contact, lat_long, geom
+ from employees)
+ e")
+#+end_src
+You could skip the Postgresql json function and ask Postmodern to return the query as a json object expressed as a string. One thing to note is that Postmodern will return the labels as camelCase rather than Postgresql returning them as underscores:
+#+begin_src lisp
+ (query "select employee_id, department_id, name, start_date, contact, lat_long, geom
+ from employees
+ where employee_id=1"
+ :json-str)
+ "{\"employeeId\":1,\"departmentId\":1,\"name\":\"Maja\",\"startDate\":\"{2018-09-01T20:00:00.000000-04:00}\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"latLong\":[59.334591,18.06324],\"geom\":\"0101000020E61000009A44BDE0D3AA4D408ECC237F30103240\"}"
+#+end_src
+You would need to do a little more work in order to get the desired latitude and longitude out of the geom value.
+#+begin_src lisp
+ (query "select employee_id, department_id, name, start_date, contact, lat_long, st_x(geom) as lat, st_y(geom) as long
+ from employees where employee_id=1" :json-str)
+
+ "{\"employeeId\":1,\"departmentId\":1,\"name\":\"Maja\",\"startDate\":\"{2018-09-01T20:00:00.000000-04:00}\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"latLong\":[59.334591,18.06324],\"lat\":59.334591,\"long\":18.06324}"
+ #+end_src
+Both the Postgresql function and the Postmodern return type approach can be applied to the end result of more complicated queries with joins, CTEs and other tools of the trade. Which is actually why we have the department table in this example. Instead of having the department-id in the json we are sending to the front end, let's have the department name.
+
+First, using the Postgresql =row-to-json= function:
+#+begin_src lisp
+ (query "select row_to_json(e)
+ from (select employee_id, departments.name as department_name, employees.name as employee_name,
+ start_date, contact, lat_long, geom
+ from employees
+ left join departments
+ on departments.department_id = employees.department_id
+ where employee_id=1) e")
+ (("{\"employee_id\":1,\"department_name\":\"spatial\",\"employee_name\":\"Maja\",\"start_date\":\"2018-09-02\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"lat_long\":\"(59.334591,18.06324)\",\"geom\":{\"type\":\"Point\",\"coordinates\":[59.334591,18.06324]}}"))
+#+end_src
+Now the sql using the Postmodern :json-str keyword parameter for query:
+#+begin_src lisp
+(query "select employee_id, departments.name as department_name, employees.name as employee_name,
+ start_date, contact, lat_long, geom
+ from employees
+ left join departments
+ on departments.department_id = employees.department_id
+ where employee_id=1"
+ :json-str)
+"{\"employeeId\":1,\"departmentName\":\"spatial\",\"employeeName\":\"Maja\",\"startDate\":\"{2018-09-01T20:00:00.000000-04:00}\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"latLong\":[59.334591,18.06324],\"geom\":\"0101000020E61000009A44BDE0D3AA4D408ECC237F30103240\"}"
+#+end_src
+
+* The Basic S-SQL Version
+:PROPERTIES:
+:CUSTOM_ID: s-sql-version
+:END:
+Assuming you already have a database to use, let's create a couple of tables and insert some data.
+#+begin_src lisp
+ (pomo:query (:create-table 'departments
+ ((department-id :type bigint :primary-key t)
+ (name :type (or pomo:db-null text)))))
+
+ (pomo:query (:create-table employees
+ ((employee_id :type serial :primary-key t)
+ (department_id :type (or pomo:db-null integer) :references ((departments department_id)))
+ (name :type (or pomo:db-null text))
+ (start_date :type (or pomo:db-null date))
+ (contact :type (or pomo:db-null text[]))
+ (private :type (or pomo:db-null text))
+ (lat_long :type (or pomo:db-null point))
+ (geom :type (or pomo:db-null (geometry point 4326))))))
+
+ (pomo:query (:insert-rows-into 'departments
+ :columns 'department-id 'name
+ :values '((1 "spatial") (2 "cloud"))))
+
+(pomo:query (:insert-rows-into 'employees
+ :columns 'department-id 'name 'start-date 'contact 'private 'lat-long 'geom
+ :values
+ '((1 "Maja" "2018/09/02" #("084-767-734" "071-334-8473") "not allowed"
+ "(59.334591, 18.063240)" "POINT(59.334591 18.063240)")
+ (1 "Liam" "2019/09/02" #("084-767-734" "071-334-8472") "private"
+ "(57.708870, 11.974560)" "POINT(57.708870 11.974560)")
+ (2 "Matteo" "2019/11/01" #("084-767-734" "071-334-8476") "burn before reading"
+ "(58.283489, 12.285821)" "POINT(58.283489 12.285821)")
+ (2 "Astrid" "2020/10/01" #("084-767-734" "071-334-8465") "abandon all hope"
+ "(57.751442, 16.628838)" "POINT(57.751442 16.628838)"))))
+#+end_src
+One difference to note is that the data for the lat_long point data type has a comma, but the geom geometry data type does not. I do not know why the syntax differs in Postgresql (Postmodern needs to match Postgresql's syntax here).
+
+I want to flag something that can surprise people. The lat_long column is a Postgresql point datatype. That means it is an array. As you may recall, Postgresql arrays start at 1, not 0. Except here. If you wanted just the latitude for the row with the employee_id of 1, you would actually call for array 0.
+#+begin_src lisp
+(pomo:query (:select (:[] 'lat_long 0) :from 'employees :where (:= 'employee_id 1)) :single)
+59.334591d0
+#+end_src
+If you wanted the latitude and longitude in a list, the query would look like:
+#+begin_src lisp
+ (pomo:query (:select (:[] 'lat_long 0) (:[] 'lat_long 1) :from 'employees :where (:= 'employee_id 1)))
+ ((59.334591d0 18.06324d0))
+#+end_src
+If you ran a basic query to see how Postgresql was storing that geometry type, it would look something like this:
+#+begin_src lisp
+ (pomo:query (:select 'geom :from 'employees :where (:= 'employee-id 1)) :single)
+ "0101000020E61000009A44BDE0D3AA4D408ECC237F30103240"
+#+end_src
+To actually get the separate latitude and longitude from the geom column, you need to use Postgresql functions st_x and st_y like so:
+#+begin_src lisp
+(query (:select (:st-x 'geom) (:st-y 'geom) :from 'employees :where (:= 'employee_id 1)))
+((59.334591d0 18.06324d0))
+#+end_src
+Now on to getting this information as json. Postgresql gives you a json generator function that takes a tuple and returns a json dictionary. So, for example:
+#+begin_src lisp
+(pomo:query (:select (:row-to-json 'employees) :from 'employees :where (:= 'employee-id 1)))
+ (("{\"employee_id\":1,
+ \"department_id\":1,
+ \"name\":\"Maja\",
+ \"start_date\":\"2018-09-02\",
+ \"contact\":[\"084-767-734\",\"071-334-8473\"],
+ \"private\":\"not allowed\",
+ \"lat_long\":\"(59.334591,18.06324)\",
+ \"geom\":{\"type\":\"Point\",\"coordinates\":[59.334591,18.06324]}}"))
+#+end_src
+You can see that it would automatically break out the geom data. However, as written, it has the fatal flaw of also collecting the private info. That can get solved with a slight modification:
+#+begin_src lisp
+ (query (:select (:row-to-json 'e)
+ :from (:as (:select 'employee-id 'department-id 'name 'start-date 'contact
+ 'lat-long 'geom
+ :from 'employees
+ :where (:= 'employee-id 1))
+ 'e)))
+ (("{\"employee_id\":1,
+ \"department_id\":1,
+ \"name\":\"Maja\",
+ \"start_date\":\"2018-09-02\",
+ \"contact\":[\"084-767-734\",\"071-334-8473\"],
+ \"lat_long\":\"(59.334591,18.06324)\",
+ \"geom\":{\"type\":\"Point\",\"coordinates\":[59.334591,18.06324]}}"))
+#+end_src
+You can also aggregate rows using the Postgresql json_agg function.
+#+begin_src lisp
+ (query (:select (:json-agg 'e)
+ :from (:as (:select 'employee-id 'department-id 'name 'start-date 'contact
+ 'lat-long 'geom
+ :from 'employees)
+ 'e)))
+#+end_src
+You could skip the Postgresql json function and ask Postmodern to return the query as a json object expressed as a string. One thing to note is that Postmodern will return the labels as camelCase rather than Postgresql returning them as underscores:
+#+begin_src lisp
+ (query (:select 'employee-id 'department-id 'name 'start-date 'contact 'lat-long 'geom
+ :from 'employees
+ :where (:= 'employee-id 1)) :json-str)
+
+"{\"employeeId\":1,\"departmentId\":1,\"name\":\"Maja\",\"startDate\":\"{2018-09-01T20:00:00.000000-04:00}\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"latLong\":[59.334591,18.06324],\"geom\":\"0101000020E61000009A44BDE0D3AA4D408ECC237F30103240\"}"
+#+end_src
+You would need to do a little more work in order to get the desired latitude and longitude out of the geom value.
+#+begin_src lisp
+ (query (:select 'employee-id 'department-id 'name 'start-date 'contact 'lat-long
+ (:st-x 'geom) (:st-y 'geom)
+ :from 'employees
+ :where (:= 'employee-id 1))
+ :json-str)
+ "{\"employeeId\":1,\"departmentId\":1,\"name\":\"Maja\",\"startDate\":\"{2018-09-01T20:00:00.000000-04:00}\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"latLong\":[59.334591,18.06324],\"stX\":59.334591,\"stY\":18.06324}"
+#+end_src
+Both the Postgresql function and the Postmodern return type approach can be applied to the end result of more complicated queries with joins, CTEs and other tools of the trade. Which is actually why we have the department table in this example. Instead of having the department-id in the json we are sending to the front end, let's have the department name.
+
+First, the s-sql version using the Postgresql =row-to-json= function:
+#+begin_src lisp
+ (query (:select (:row-to-json 'e)
+ :from (:as (:select 'employee-id (:as 'departments.name 'department_name)
+ (:as 'employees.name 'employee-name)
+ 'start-date 'contact 'lat-long
+ (:st-x 'geom) (:st-y 'geom)
+ :from 'employees
+ :left-join 'departments
+ :on (:= 'departments.department-id 'employees.department-id)
+ :where (:= 'employee-id 1))
+ 'e)))
+
+ (("{\"employee_id\":1,\"department_name\":\"spatial\",\"employee_name\":\"Maja\",\"start_date\":\"2018-09-02\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"lat_long\":\"(59.334591,18.06324)\",\"st_x\":59.334591,\"st_y\":18.06324}"))
+#+end_src
+Now the s-sql using the Postmodern :json-str keyword parameter for query:
+#+begin_src lisp
+ (query (:select 'employee-id (:as 'departments.name 'department-name)
+ (:as 'employees.name 'employee-name)
+ 'start-date 'contact 'lat-long (:st-x 'geom) (:st-y 'geom)
+ :from 'employees
+ :left-join 'departments
+ :on (:= 'departments.department-id 'employees.department-id)
+ :where (:= 'employee-id 1))
+ :json-str)
+ "{\"employeeId\":1,\"departmentName\":\"spatial\",\"employeeName\":\"Maja\",\"startDate\":\"{2018-09-01T20:00:00.000000-04:00}\",\"contact\":[\"084-767-734\",\"071-334-8473\"],\"latLong\":[59.334591,18.06324],\"stX\":59.334591,\"stY\":18.06324}"
+#+end_src
+
+* The Basic Dao-class Version
+:PROPERTIES:
+:CUSTOM_ID: dao-class-version
+:END:
+Assuming you already have a database to use, let's create a couple of dao classes, their associated tables and insert some data. Assume we decide we want to keep the geom as a list of latitude and longitude in the geom slot. That means we need import and export functions.
+#+begin_src lisp
+ (defclass departments ()
+ ((department-id :col-type serial :initarg :department-id :accessor department-id
+ :col-primary-key t)
+ (name :col-type (or text pomo:db-null) :initarg :name :accessor name))
+ (:metaclass pomo:dao-class))
+
+ (pomo:execute (dao-table-definition 'departments))
+
+ (defclass employees ()
+ ((employee-id :col-type serial :initarg :employee-id :accessor employee-id
+ :col-primary-key t)
+ (department-id :col-type integer :initarg :department-id :accessor department-id
+ :col-references ((departments department-id)))
+ (name :col-type text :initarg name :accessor name)
+ (start-date :col-type (or date pomo:db-null) :initarg start-date :accessor start-date)
+ (contact :col-type (or pomo:db-null (array text)) :initarg contact :accessor contact)
+ (private :col-type (or pomo:db-null text) :initarg private :accessor private)
+ (lat-long :col-type (or pomo:db-null point) :initarg lat-long :accessor lat-long)
+ (geom :col-type (or pomo:db-null (geometry point 4326)) :initarg geom :accessor geom
+ :col-import geom->wkb-point))
+ (:metaclass pomo:dao-class))
+
+ ;; make-dao creates an instance of the dao and saves it in the database
+ (pomo:make-dao 'departments :department-id 1 :name "spatial")
+ (pomo:make-dao 'departments :department-id 2 :name "cloud")
+
+ (pomo:make-dao 'employees :department-id 1 :name "Maja" :start-date "2018/09/02"
+ :contact #("084-767-734" "071-334-8473")
+ :private "not allowed" :lat-long "(59.334591, 18.063240)"
+ :geom "POINT(59.334591 18.063240)")
+
+ (pomo:make-dao 'employees :department-id 1 :name "Liam" :start-date "2019/09/02"
+ :contact #("084-767-734" "071-334-8472")
+ :private "private" :lat-long "(57.708870, 11.974560)"
+ :geom "POINT(57.708870 11.974560)")
+
+ (pomo:make-dao 'employees :department-id 2 :name "Matteo" :start-date "2019/11/01"
+ :contact #("084-767-734" "071-334-8476")
+ :private "burn before reading" :lat-long "(58.283489, 12.285821)"
+ :geom "POINT(58.283489 12.285821)")
+
+ (pomo:make-dao 'employees :department-id 2 :name "Astrid" :start-date "2020/10/01"
+ :contact #("084-767-734" "071-334-8465")
+ :private "abandon all hope" :lat-long "(57.751442, 16.628838)"
+ :geom "POINT(57.751442 16.628838)")
+ #+end_src
+One difference to note is that the data for the lat_long point data type has a comma, but the geom geometry data type does not. I do not know why the syntax differs.
+
+Now the problem. If you ran a basic query to see how Postgresql was storing that geometry type, it would look something like this:
+
+#+begin_src lisp
+ (pomo:query "select geom from employees where employee_id=1" :single)
+"0101000020E61000009A44BDE0D3AA4D408ECC237F30103240"
+#+end_src
+
+We need import and export functions for the geom slot that implement the OpenGIS specification. See [[https://www.ogc.org/standards/sfs]]. Fortunately J.P. Larocque created the cl-wkb package (accessed via quicklisp by quickloading the
+[[https://github.com/filonenko-mikhail/cl-ewkb][cl-ewkb system]]) and we can create an import function by combining ironclad's hex-string-to-byte-array and cl-wkb's decode function. So let's do that.
+#+begin_src lisp
+ (defun geom->wkb-point (input)
+ "Takes a hexstring that represents a geometry point from postgresql and returns a cl-wkb:point class instance"
+ (cl-wkb:decode (ironclad:hex-string-to-byte-array input)))
+#+end_src
+Now we can check whether we succeeded by seeing whether the x point is the latitude we expected:
+#+begin_src lisp
+ (cl-wkb:x (geom (pomo:get-dao 'employees 1)))
+59.334591d0
+#+end_src
+
+We still need to get from the dao-class to json. You could simply run cl-json's =encode-json= function on a dao object like so:
+#+begin_src lisp
+ (cl-json:encode-json (pomo:get-dao 'employees 1))
+
+ {"employeeId":1,
+ "departmentId":1,
+ "name":"Maja",
+ "startDate":{"day":6759,"sec":0,"nsec":0},
+ "contact":["084-767-734","071-334-8473"],
+ "private":"not allowed",
+ "latLong":[59.334591,18.06324],
+ "geom":{"geomtype":536870913,"srid":4326,"pointPrimitive":{"x":59.334591,"y":18.06324,"z":0.0,"m":0.0}}}
+#+end_src
+Looking at the result, we have two issues. First, the start date has been mangled. Second, it is collecting and passing the private data on to the front end, which we explicitly did not want to do.
+
+Just checking on the date situation:
+#+begin_src lisp
+(start-date (pomo:get-dao 'employees 1))
+@2018-09-01T20:00:00.000000-04:00
+#+end_src
+That works, so it is something on the cl-json side that we will have to work around. Let's turn to the private data issue.
+
+One solution would be to create a dao-class covering only a subset of the employees table (minus the private data) and set =pomo:*ignore-unknown-columns*= to t. (If we did not set =pomo:*ignore-unknown-columns*=, Postmodern would signal an error complaining that the dao class
+was not in sync with the table.) Let's do that:
+#+begin_src lisp
+  (defclass employees-minus-private ()
+    ((employee-id :col-type serial :initarg :employee-id :accessor employee-id :col-primary-key t)
+     (department-id :col-type integer :initarg :department-id :accessor department-id :col-references ((departments department-id)))
+     (name :col-type text :initarg :name :accessor name)
+     (start-date :col-type (or date pomo:db-null) :initarg :start-date :accessor start-date)
+     (contact :col-type (or pomo:db-null (array text)) :initarg :contact :accessor contact)
+     (lat-long :col-type (or pomo:db-null point) :initarg :lat-long :accessor lat-long)
+     (geom :col-type (or pomo:db-null (geometry point 4326)) :initarg :geom :accessor geom
+           :col-import geom->wkb-point))
+    (:table-name employees)
+    (:metaclass pomo:dao-class))
+
+  (setf pomo:*ignore-unknown-columns* t)
+ #+end_src
+ And now cl-json generates a json string without the private data:
+ #+begin_src lisp
+ (cl-json:encode-json (pomo:get-dao 'employees-minus-private 1))
+ {"employeeId":1,"departmentId":1,"name":"Maja","startDate":3744835200,"contact":["084-767-734","071-334-8473"],"latLong":[59.334591,18.06324],"geom":{"geomtype":536870913,"srid":4326,"pointPrimitive":{"x":59.334591,"y":18.06324,"z":0.0,"m":0.0}}}
+ #+end_src
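+
+Note that the start date is still not rendered in a human-readable form, because cl-json has no built-in rendering for the timestamp objects Postmodern hands back. Since cl-json's =encode-json= is a generic function, one possible workaround (a sketch, assuming the date columns are being read as =local-time:timestamp= objects, as the earlier =@2018-09-01...= output suggests) is to specialize it:
+#+begin_src lisp
+  ;; Sketch: teach cl-json to render local-time timestamps as ISO-8601 strings
+  ;; by re-encoding the timestamp as a formatted string.
+  (defmethod cl-json:encode-json ((object local-time:timestamp) &optional stream)
+    (cl-json:encode-json (local-time:format-timestring nil object) stream))
+#+end_src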
+If you are using a different CL json library, you will have to write your own function to convert a dao-class object into something that, e.g., jonathan or jsown can use.
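+
+Such a conversion function might look like the following sketch, which walks the slots via the closer-mop portability layer (the name =dao->alist= is illustrative, not part of Postmodern):
+#+begin_src lisp
+  ;; Sketch: flatten a dao object's bound slots into an alist of
+  ;; (lower-case-name . value) pairs that jonathan or jsown can encode.
+  (defun dao->alist (dao)
+    (loop for slot in (closer-mop:class-slots (class-of dao))
+          for name = (closer-mop:slot-definition-name slot)
+          when (slot-boundp dao name)
+            collect (cons (string-downcase (symbol-name name))
+                          (slot-value dao name))))
+#+end_src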
+
+Handling joins in a dao-class is more complicated; the Postmodern dao-class is intended to be simple, not to recreate Hibernate or SQLAlchemy. You can see an example at [[https://marijnhaverbeke.nl/postmodern/dao-classes.html#multi-table-dao-class-object]].
diff --git a/doc/postmodern.html b/doc/postmodern.html
index f49f51b..92e9cae 100644
--- a/doc/postmodern.html
+++ b/doc/postmodern.html
@@ -1,7 +1,7 @@
-
+
Postmodern Reference Manual
@@ -246,7 +246,7 @@
Return a single string where the row returned is a json object expressed as a string
@@ -998,9 +1000,9 @@
macro query (query &rest args/format)
Some Examples:
-
-
Default
-
+
+
Default
+
The default is :lists
@@ -1011,9 +1013,9 @@
Default
-
-
Single
-
+
+
Single
+
Returns a single field. Will throw an error if the query returns more than one field or more than one row
@@ -1024,9 +1026,9 @@
Single
-
-
List
-
+
+
List
+
Returns a list containing the selected fields. Will throw an error if the query returns more than one row
@@ -1037,9 +1039,9 @@
List
-
-
Lists
-
+
+
Lists
+
This is the default
@@ -1050,9 +1052,9 @@
Lists
-
-
Alist
-
+
+
Alist
+
Returns an alist containing the field name as a keyword and the selected fields. Will throw an error if the query returns more than one row.
@@ -1063,9 +1065,9 @@
Alist
-
-
Str-alist
-
+
+
Str-alist
+
Returns an alist containing the field name as a lower case string and the selected fields. Will throw an error if the query returns more than one row.
@@ -1076,9 +1078,9 @@
Str-alist
-
-
Alists
-
+
+
Alists
+
Returns a list of alists containing the field name as a keyword and the selected fields.
@@ -1090,9 +1092,9 @@
Alists
-
-
Str-alists
-
+
+
Str-alists
+
Returns a list of alists containing the field name as a lower case string and the selected fields.
@@ -1104,9 +1106,9 @@
Str-alists
-
-
Plist
-
+
+
Plist
+
Returns a plist containing the field name as a keyword and the selected fields. Will throw an error if the query returns more than one row.
@@ -1117,9 +1119,9 @@
Plist
-
-
Plists
-
+
+
Plists
+
Returns a list of plists containing the field name as a keyword and the selected fields.
@@ -1130,9 +1132,9 @@
Plists
-
-
Vectors
-
+
+
Vectors
+
Returns a vector of vectors where each internal vector is a returned row from the query. The field names are not included. NOTE: It will return an empty vector instead of NIL if there is no result.
@@ -1150,9 +1152,9 @@
Vectors
-
-
Array-hash
-
+
+
Array-hash
+
Returns a vector of hashtables where each hash table is a returned row from the query with field name as the key expressed as a lower case string.
@@ -1170,9 +1172,9 @@
Array-hash
-
-
Dao
-
+
+
Dao
+
Returns a list of daos of the type specified
@@ -1186,9 +1188,9 @@
Dao
-
-
Column
-
+
+
Column
+
Returns a list of field values of a single field. Will throw an error if more than one field is selected
@@ -1202,9 +1204,9 @@
Column
-
-
Json-strs
-
+
+
Json-strs
+
Return a list of strings where the row returned is a json object expressed as a string
@@ -1242,9 +1244,9 @@
Json-strs
-
-
Json-str
-
+
+
Json-str
+
Return a single string where the row returned is a json object expressed as a string
@@ -1259,9 +1261,9 @@
Json-str
-
-
Json-array-str
-
+
+
Json-array-str
+
Return a string containing a json array, each element in the array is a selected row expressed as a json object. NOTE: If there is no result, this will return a string with an empty json array.
@@ -1278,9 +1280,9 @@
Json-array-str
-
-
Second value returned
-
+
+
Second value returned
+
If the database returns information about the number of rows that were affected,
such as with updating or deleting queries, this is returned as a second value.
@@ -2330,9 +2332,9 @@
function add-comment (type name comment &optio
-
-
find-comments (type identifier)
-
+
+
find-comments (type identifier)
+
Returns the comments attached to a particular database object. The allowed
types are :database :schema :table :columns (all the columns in a table)
@@ -2566,6 +2568,24 @@
function list-installed-extensions (
+
+
+
function load-uuid-extension ()
+
+
+Loads the Postgresql uuid-ossp contrib module. Once loaded, you can call uuid
+generation functions such as uuid_generate_v4 within a query. E.g.
+