FMWK-312 Cleanup for configuration and documentation #691

Merged (2 commits) on Jan 23, 2024
2 changes: 1 addition & 1 deletion src/main/asciidoc/index.adoc
@@ -7,7 +7,7 @@
:toclevels: 1
:spring-data-commons-docs-online: https://docs.spring.io/spring-data/commons/docs/current/reference/html

(C) 2018-2023 The original authors.
(C) 2018-2024 The original authors.

NOTE: Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.

29 changes: 9 additions & 20 deletions src/main/asciidoc/reference/aerospike-object-mapping.adoc
@@ -13,43 +13,32 @@ For more details refer to Spring Data documentation:

`AerospikeMappingConverter` has a few conventions for mapping objects to documents when no additional mapping metadata is provided. The conventions are:


[[mapping-conventions-id-field]]
=== How the '_id' Field Is Handled in the Mapping Layer

AerospikeDB requires that you have an '_id' field for all objects. If you don't provide one, the driver will assign an ObjectId with a generated value. The "_id" field can be of any type, other than arrays, so long as it is unique. The driver naturally supports all primitive types and Dates. When using the `AerospikeMappingConverter` there are certain rules that govern how properties from the Java class are mapped to this '_id' field.
=== How the 'id' Field Is Handled in the Mapping Layer

The following outlines what field will be mapped to the '_id' document field:
Aerospike DB requires that you have an `id` field for all objects. The `id` field can be of any primitive type as well as `String` or `byte[]`.

* A field annotated with `@Id` (`org.springframework.data.annotation.Id`) will be mapped to the '_id' field.
* A field without an annotation but named 'id' will be mapped to the '_id' field.
* The default field name for identifiers is '_id' and can be customized via the `@Field` annotation.
The following table outlines the requirements for an `id` field:

[cols="1,2", options="header"]
.Examples for the translation of '_id'-field definitions
|===
| Field definition
| Resulting Id-Fieldname in AerospikeDB
| Description

| `String` id
| `_id`
| A field named 'id' without an annotation

| `@Field` `String` id
| `_id`

| `@Field('x')` `String` id
| `x`
| A field annotated with `@Id` (`org.springframework.data.annotation.Id`)

| `@Id` `String` x
| `_id`
| `@Id` `String` customNamedIdField

| `@Field('x')` `@Id` `String` x
| `_id`
|===

The following outlines what type of conversion, if any, will be done on the property mapped to the _id document field.
The following outlines what type of conversion, if any, will be done on the property mapped to the `id` document field:

* By default, the type of the field annotated with `@Id` is turned into a `String` to be stored in the Aerospike database. If the original type cannot be persisted (see xref:#configuration.keep-original-key-types[keepOriginalKeyTypes] for details), it must be convertible to `String` and will be stored in the database as such, then converted back to the original type when the object is read. This is transparent to the application but needs to be considered if using external tools like `AQL` to view the data.
* By default, the type of the `id` field is turned into a `String` to be stored in the Aerospike database. If the xref:#configuration.keep-original-key-types[keepOriginalKeyTypes] parameter is set to `true`, IDs of type `long` (`int` will also be stored as `long`) and `byte[]` will be persisted as is. If the original type cannot be persisted, it must be convertible to `String` and will be stored in the database as such, then converted back to the original type when the object is read. This is transparent to the application but needs to be considered if using external tools like `AQL` to view the data.
* If no field named "id" is present in the Java class, then an implicit '_id' field will be generated by the driver but not mapped to a property or field of the Java class.

When querying and updating `AerospikeTemplate` will use the converter to handle conversions of the `Query` and `Update` objects that correspond to the above rules for saving documents so field names and types used in your queries will be able to match what is in your domain classes.
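The default id conversion described above (store as `String`, convert back on read) can be illustrated with a minimal standalone sketch. This is not the library's actual converter code, just the round-trip behavior it describes for a `long` id:

```java
// Illustrative sketch of the default id round-trip: a long id is persisted
// as a String and converted back to the original type when the object is read.
public class IdConversionSketch {

    // Default behavior: the id is turned into a String for storage.
    static String toStoredKey(long id) {
        return Long.toString(id);
    }

    // On read, the stored String is converted back to the original type.
    static long fromStoredKey(String stored) {
        return Long.parseLong(stored);
    }

    public static void main(String[] args) {
        long originalId = 42L;
        String stored = toStoredKey(originalId);
        // The round trip is transparent to the application.
        System.out.println(fromStoredKey(stored) == originalId);
    }
}
```

Note that tools like `AQL` would show the stored `String` form, not the original `long`, which is why the document flags this for external tooling.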
2 changes: 1 addition & 1 deletion src/main/asciidoc/reference/installation-and-usage.adoc
@@ -178,6 +178,6 @@ aql> select * from test.Person where pk = "1"

NOTE: The fully qualified path of the class is listed in each record. This is needed to instantiate the class correctly, especially in cases when the compile-time type and runtime type of the object differ. For example, where a field is declared as a super class but the instantiated class is a subclass.

NOTE: By default, the type of the field annotated with `@Id` is turned into a `String` to be stored in the Aerospike database. If the original type cannot be persisted (see xref:#configuration.keep-original-key-types[keepOriginalKeyTypes] for details), it must be convertible to `String` and will be stored in the database as such, then converted back to the original type when the object is read. This is transparent to the application but needs to be considered if using external tools like `AQL` to view the data.
NOTE: By default, the type of the `id` field is turned into a `String` to be stored in the Aerospike database. If the xref:#configuration.keep-original-key-types[keepOriginalKeyTypes] parameter is set to `true`, IDs of type `long` (`int` will also be stored as `long`) and `byte[]` will be persisted as is. If the original type cannot be persisted, it must be convertible to `String` and will be stored in the database as such, then converted back to the original type when the object is read. This is transparent to the application but needs to be considered if using external tools like `AQL` to view the data.
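Enabling `keepOriginalKeyTypes` is a configuration concern. The sketch below shows roughly where such a setting would be toggled; the override name and builder API are assumptions about this library's configuration class, so check `AbstractAerospikeDataConfiguration` in your spring-data-aerospike version for the actual hook:

```java
// Hypothetical configuration sketch -- method and builder names are
// assumptions, not confirmed API; consult AbstractAerospikeDataConfiguration.
@Configuration
public class MyAerospikeConfig extends AbstractAerospikeDataConfiguration {

    // Assumed data-settings hook: keep long/byte[] ids unconverted in storage.
    @Override
    public void configureDataSettings(AerospikeDataSettings.AerospikeDataSettingsBuilder builder) {
        builder.keepOriginalKeyTypes(true);
    }
}
```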


@@ -214,9 +214,13 @@ protected ClientPolicy getClientPolicy() {
ClientPolicy clientPolicy = new ClientPolicy();
clientPolicy.failIfNotConnected = true;
clientPolicy.timeout = 10_000;
clientPolicy.readPolicyDefault.sendKey = true;
clientPolicy.writePolicyDefault.sendKey = true;
clientPolicy.batchPolicyDefault.sendKey = true;
boolean sendKey = true;
clientPolicy.readPolicyDefault.sendKey = sendKey;
clientPolicy.writePolicyDefault.sendKey = sendKey;
clientPolicy.batchPolicyDefault.sendKey = sendKey;
clientPolicy.batchWritePolicyDefault.sendKey = sendKey;
clientPolicy.queryPolicyDefault.sendKey = sendKey;
clientPolicy.scanPolicyDefault.sendKey = sendKey;
return clientPolicy;
}

@@ -533,7 +533,7 @@ protected boolean batchRecordFailed(BatchRecord batchRecord) {
}

protected boolean batchWriteSupported() {
return serverVersionSupport.batchWrite();
return serverVersionSupport.isBatchWriteSupported();
}

protected enum OperationType {
@@ -79,7 +79,7 @@ public void enrichIndexesWithCardinality(IAerospikeClient client, Map<IndexKey,

public int getIndexBinValuesRatio(IAerospikeClient client, ServerVersionSupport serverVersionSupport,
String namespace, String indexName) {
if (serverVersionSupport.sIndexCardinality()) {
if (serverVersionSupport.isSIndexCardinalitySupported()) {
try {
String indexStatData = Info.request(client.getInfoPolicyDefault(), client.getCluster().getRandomNode(),
String.format("sindex-stat:ns=%s;indexname=%s", namespace, indexName));
@@ -57,17 +57,17 @@ public boolean isDropCreateBehaviorUpdated() {
/**
* Since Aerospike Server ver. 6.3.0.0 find by POJO is supported.
*/
public boolean findByPojo() {
public boolean isFindByPojoSupported() {
return ModuleDescriptor.Version.parse(getServerVersion())
.compareTo(SERVER_VERSION_6_3_0_0) >= 0;
}

public boolean batchWrite() {
public boolean isBatchWriteSupported() {
return ModuleDescriptor.Version.parse(getServerVersion())
.compareTo(SERVER_VERSION_6_0_0_0) >= 0;
}

public boolean sIndexCardinality() {
public boolean isSIndexCardinalitySupported() {
return ModuleDescriptor.Version.parse(getServerVersion())
.compareTo(SERVER_VERSION_6_1_0_0) >= 0;
}
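The renamed predicates above all follow one pattern: parse the server version and compare it against the minimum version that introduced a feature. A standalone sketch of that pattern, using only the JDK's `ModuleDescriptor.Version` (the same comparison mechanism the methods above use; the class and method names here are illustrative, not the library's):

```java
import java.lang.module.ModuleDescriptor.Version;

// Sketch of the version-gating pattern: a feature is supported when the
// server version is at or above the minimum version that introduced it.
public class VersionGate {

    static boolean isSupported(String serverVersion, String minVersion) {
        return Version.parse(serverVersion).compareTo(Version.parse(minVersion)) >= 0;
    }

    public static void main(String[] args) {
        // Per the code above: batch write needs 6.0.0.0, sindex cardinality 6.1.0.0,
        // find-by-POJO 6.3.0.0.
        System.out.println(isSupported("6.3.0.0", "6.0.0.0")); // true
        System.out.println(isSupported("5.7.0.0", "6.1.0.0")); // false
    }
}
```

Centralizing these checks in `ServerVersionSupport` and naming them `is...Supported()` makes call sites read as capability checks rather than raw version comparisons.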
@@ -84,7 +84,7 @@ public List<Person> saveGeneratedPersons(int count) {

public <T> void deleteAll(ReactiveAerospikeRepository<T, ?> repository, Collection<T> entities) {
// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
try {
repository.deleteAll(entities).block();
} catch (AerospikeException.BatchRecordArray ignored) {
@@ -97,7 +97,7 @@ public <T> void deleteAll(ReactiveAerospikeRepository<T, ?> repository, Collecti

public <T> void saveAll(ReactiveAerospikeRepository<T, ?> repository, Collection<T> entities) {
// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
repository.saveAll(entities).blockLast();
} else {
entities.forEach(entity -> repository.save(entity).block());
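The `deleteAll`/`saveAll` helpers above share one shape: use the batch operation when the server supports it, otherwise fall back to per-entity calls. A generic, dependency-free sketch of that fallback pattern (names are illustrative, not the library's):

```java
import java.util.List;
import java.util.function.Consumer;

// Sketch of the batch-or-fallback pattern used by the helpers above:
// apply a batch operation when supported, otherwise one entity at a time.
public class BatchFallback {

    static <T> void applyAll(boolean batchSupported,
                             Consumer<List<T>> batchOp,
                             Consumer<T> singleOp,
                             List<T> entities) {
        if (batchSupported) {
            batchOp.accept(entities); // single round trip, Server 6.0+
        } else {
            entities.forEach(singleOp); // one call per entity on older servers
        }
    }
}
```

Either path leaves the repository in the same state; only the number of round trips differs, which is why the tests can gate on `isBatchWriteSupported()` without changing assertions.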
@@ -166,7 +166,7 @@ public void deleteById_returnsFalseIfValueIsAbsent() {

@Test
public void deleteByGroupedKeys() {
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
List<Person> persons = additionalAerospikeTestOperations.saveGeneratedPersons(5);
List<String> personsIds = persons.stream().map(Person::getId).toList();
List<Customer> customers = additionalAerospikeTestOperations.saveGeneratedCustomers(3);
@@ -245,7 +245,7 @@ public void deleteByType_NullTypeThrowsException() {
@Test
public void deleteByIds_rejectsDuplicateIds() {
// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
String id1 = nextId();
DocumentWithExpiration document1 = new DocumentWithExpiration(id1);
DocumentWithExpiration document2 = new DocumentWithExpiration(id1);
@@ -262,7 +262,7 @@ public void deleteByIds_rejectsDuplicateIds() {
@Test
public void deleteByIds_ShouldDeleteAllDocuments() {
// batch delete operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
String id1 = nextId();
String id2 = nextId();
template.save(new DocumentWithExpiration(id1));
@@ -287,7 +287,7 @@ public void deleteByIds_ShouldDeleteAllDocuments() {
@Test
public void deleteByIds_ShouldDeleteAllDocumentsWithSetName() {
// batch delete operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
String id1 = nextId();
String id2 = nextId();
template.save(new DocumentWithExpiration(id1), OVERRIDE_SET_NAME);
@@ -303,7 +303,7 @@ public void deleteByIds_ShouldDeleteAllDocumentsWithSetName() {
@Test
public void deleteAll_rejectsDuplicateIds() {
// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
String id1 = nextId();
DocumentWithExpiration document1 = new DocumentWithExpiration(id1);
DocumentWithExpiration document2 = new DocumentWithExpiration(id1);
@@ -319,7 +319,7 @@ public void deleteAll_rejectsDuplicateIds() {
@Test
public void deleteAll_ShouldDeleteAllDocuments() {
// batch delete operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
String id1 = nextId();
String id2 = nextId();
DocumentWithExpiration document1 = new DocumentWithExpiration(id1);
@@ -345,7 +345,7 @@ public void deleteAll_ShouldDeleteAllDocuments() {
@Test
public void deleteAll_ShouldDeleteAllDocumentsWithSetName() {
// batch delete operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
String id1 = nextId();
String id2 = nextId();
DocumentWithExpiration document1 = new DocumentWithExpiration(id1);
@@ -361,7 +361,7 @@ public void deleteAll_ShouldDeleteAllDocumentsWithSetName() {
@Test
public void deleteAll_ShouldDeleteAllDocumentsBeforeGivenLastUpdateTime() {
// batch delete operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
String id1 = nextId();
String id2 = nextId();
CollectionOfObjects document1 = new CollectionOfObjects(id1, List.of("test1"));
@@ -413,7 +413,7 @@ public void deleteAll_ShouldDeleteAllDocumentsBeforeGivenLastUpdateTime() {
@Test
public void deleteAll_VersionsMismatch() {
// batch delete operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
String id1 = "id1";
VersionedClass document1 = new VersionedClass(id1, "test1");
String id2 = "id2";
@@ -53,7 +53,7 @@ public void beforeAllSetUp() {
deleteOneByOne(allPersons, OVERRIDE_SET_NAME);

// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
template.insertAll(allPersons);
template.insertAll(allPersons, OVERRIDE_SET_NAME);
} else {
@@ -83,7 +83,7 @@ public void setUp() {
super.setUp();
template.deleteAll(Person.class);
template.deleteAll(OVERRIDE_SET_NAME);
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
template.insertAll(allPersons);
template.insertAll(allPersons, OVERRIDE_SET_NAME);
} else {
@@ -84,7 +84,7 @@ public void beforeAllSetUp() {
deleteOneByOne(allPersons);

// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
template.insertAll(allPersons);
template.insertAll(allPersons, OVERRIDE_SET_NAME);
} else {
@@ -295,7 +295,7 @@ public void findAll_findNothing() {
assertThat(result).isEmpty();

// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
template.insertAll(allPersons);
} else {
allPersons.forEach(person -> template.insert(person));
@@ -202,7 +202,7 @@ public void insertsOnlyFirstDocumentAndNextAttemptsShouldFailWithDuplicateKeyExc
@Test
public void insertAll_insertsAllDocuments() {
// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
List<Person> persons = IntStream.range(1, 10)
.mapToObj(age -> Person.builder().id(nextId())
.firstName("Gregor")
@@ -239,7 +239,7 @@ public void insertAllWithSetName_insertsAllDocuments() {
.collect(Collectors.toList());

// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
template.insertAll(persons, OVERRIDE_SET_NAME);
} else {
persons.forEach(person -> template.insert(person, OVERRIDE_SET_NAME));
@@ -254,7 +254,7 @@ public void insertAllWithSetName_insertsAllDocuments() {
@Test
public void insertAll_rejectsDuplicateIds() {
// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
VersionedClass second = new VersionedClass("as-5440", "foo");
assertThatThrownBy(() -> template.insertAll(List.of(second, second)))
.isInstanceOf(OptimisticLockingFailureException.class)
@@ -266,7 +266,7 @@ public void insertAll_rejectsDuplicateIds() {
@Test
public void shouldInsertAllVersionedDocuments() {
// batch write operations are supported starting with Server version 6.0+
if (serverVersionSupport.batchWrite()) {
if (serverVersionSupport.isBatchWriteSupported()) {
VersionedClass first = new VersionedClass(id, "foo");
VersionedClass second = new VersionedClass(nextId(), "foo", 1L);
VersionedClass third = new VersionedClass(nextId(), "foo", 2L);