PostgreSQL
PostgreSQL is a popular open-source relational database management system used for a variety of purposes, including:
- Web applications: PostgreSQL is commonly used for web applications that require a high degree of reliability, scalability, and performance. It supports a wide range of data types, including structured, semi-structured, and unstructured data, making it suitable for many kinds of applications.
- Geospatial applications: PostgreSQL provides robust support for geospatial data and can store, manage, and analyze spatial data such as maps, geolocation data, and other location-based data.
- Data warehousing: PostgreSQL is well-suited for data warehousing, which involves storing large volumes of data from multiple sources and then analyzing that data to extract insights and trends. PostgreSQL's support for complex data types and its ability to handle large data sets make it ideal for this purpose.
- Scientific applications: PostgreSQL is used in many scientific applications for storing and analyzing data, including astronomy, genetics, and physics. Its support for complex data types and arrays makes it well-suited for scientific data.
- Financial applications: PostgreSQL is commonly used in financial applications such as trading systems, risk management, and fraud detection. Its reliability, scalability, and transactional integrity make it an ideal choice for these applications.
Overall, PostgreSQL is a versatile and powerful database system that can be used for a wide range of applications. Its flexibility, reliability, and scalability make it a popular choice for developers and businesses alike.
PostgreSQL is a popular open-source relational database management system (RDBMS). It was originally developed at the University of California, Berkeley, in the 1980s and has since become one of the most advanced and powerful database management systems available.
Some of the key features of PostgreSQL include:
- Support for a wide range of SQL standards and features
- Built-in support for JSON and other non-relational data types
- Advanced concurrency control and locking mechanisms
- Support for full-text search and advanced indexing options
- Extensibility through the use of custom functions, data types, and procedural languages such as PL/pgSQL and PL/Python
- High availability and fault tolerance features such as replication, point-in-time recovery, and online backups
- Strong security features including encryption, authentication, and access control mechanisms.
Overall, PostgreSQL is known for its performance, reliability, and flexibility, and is widely used in a variety of applications and industries.
PostgreSQL supports a wide range of data types, including:
- Numeric: integer, bigint, numeric, real, and double precision
- Monetary: money
- Character: character varying, character, text
- Binary: bytea
- Date and Time: date, time, timestamp, interval
- Boolean: boolean
- Enumerated Types: enum types created with CREATE TYPE ... AS ENUM
- Geometric: point, line, lseg, box, path, polygon, circle
- Network Address: inet, cidr, macaddr
- Bit String: bit, bit varying
- Text Search: tsvector, tsquery
- UUID: uuid
- XML: xml
- JSON: json, jsonb
- Arrays: arrays of any built-in or user-defined element type
- Range Types: int4range, int8range, numrange, tsrange, tstzrange, daterange
- Composite Types: row types created with CREATE TYPE ... AS
- Domain Types: constrained versions of existing types created with CREATE DOMAIN
PostgreSQL also allows creating user-defined data types by combining built-in data types.
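For example, a composite type and a domain can be defined like this (the type names and constraint here are illustrative):
-- A composite type grouping related fields into a single value
CREATE TYPE address AS (
    street  text,
    city    text,
    zip     varchar(10)
);
-- A domain: a built-in type with an extra constraint attached
CREATE DOMAIN positive_int AS integer CHECK (VALUE > 0);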
To create a new database in PostgreSQL, you can follow these steps:
- Open a command-line interface to access the PostgreSQL server.
- Use the createdb command to create a new database, followed by the desired name of the database. For example, to create a database called "mydatabase", run the following command:
createdb mydatabase
- Optionally, you can specify additional options for the new database, such as the character encoding or the owner of the database. For example, to create a database called "mydatabase" with UTF-8 encoding and owned by user "myuser", run the following command:
createdb --encoding=UTF-8 --owner=myuser mydatabase
- Once the database is created, you can connect to it using a PostgreSQL client or command-line interface and start working with it. For example, to connect to the "mydatabase" database using the psql command-line interface, run the following command:
psql mydatabase
This will open a new session to the "mydatabase" database, where you can run SQL commands and queries.
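Equivalently, the same database can be created from inside psql with plain SQL:
CREATE DATABASE mydatabase ENCODING 'UTF8' OWNER myuser;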
To create a new table in PostgreSQL, you can use the CREATE TABLE statement followed by the table name and the column definitions. Here's a basic example:
CREATE TABLE mytable (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
age INT,
email TEXT UNIQUE
);
In this example, we're creating a table called mytable with four columns: id, name, age, and email. The id column is an auto-incrementing SERIAL column that will serve as the primary key for the table. The name column is a required TEXT field, while the age column is an optional INT field. Finally, the email column is a unique TEXT field, meaning that each value in the column must be unique.
You can customize the column definitions based on your specific needs, and you can also add additional clauses to the CREATE TABLE statement to define constraints, indexes, and other table properties.
PostgreSQL supports several types of indexes to improve the performance of queries. Here are the commonly used types of indexes in PostgreSQL:
- B-tree index: This is the default index type in PostgreSQL and is suitable for most cases. It supports both equality and range queries on any sortable data type.
- Hash index: This index type handles only simple equality comparisons; it can be slightly faster than a B-tree for such lookups, but it does not support range queries.
- GiST index: This index type is used for complex data types like geometric and full-text data. It supports advanced search and indexing techniques.
- GIN index: This index type is used for columns whose values contain multiple components, such as arrays, jsonb documents, and full-text search vectors.
- SP-GiST index: This index type is suited to data that can be divided into non-overlapping regions, such as quadtrees and other space-partitioning structures.
- BRIN index: A block range index, designed for very large tables where the physical row order correlates with the indexed column (for example, append-only timestamp data). It is very small and cheap to maintain.
- RUM index: An index type provided by a separate extension (not shipped with core PostgreSQL) that extends GIN for ranked full-text search.
Each index type has its own advantages and disadvantages, and choosing the right index type depends on the data type and the type of queries that will be executed on the table.
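A few corresponding CREATE INDEX statements, as a sketch (the table and column names are illustrative):
-- Default B-tree index for equality and range lookups
CREATE INDEX idx_users_email ON users (email);
-- GIN index on a jsonb column
CREATE INDEX idx_events_payload ON events USING gin (payload);
-- BRIN index for a large, append-only table ordered by time
CREATE INDEX idx_logs_created_at ON logs USING brin (created_at);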
Indexes in PostgreSQL offer the following advantages:
- Improved Performance: Indexes can significantly improve the performance of SELECT queries by reducing the amount of data PostgreSQL has to scan to find the requested rows.
- Fast Sorting: Indexes can be used to sort data quickly, which can be useful in situations where you need to return a large number of sorted rows.
- Constraints: Indexes can be used to enforce unique and primary key constraints, ensuring the integrity of the data in the table.
- Full-Text Search: Indexes can be used to perform full-text search, which enables you to search for words or phrases within a text column.
- Reduced Disk Access: Indexes reduce the amount of disk access required to find data, which can improve the overall performance of your database.
- Reduced Locking: Indexes can reduce the amount of locking required when updating or inserting data, which can improve the concurrency of your database.
Optimizing queries in PostgreSQL is an essential task for improving database performance. Here are some techniques to optimize queries:
- Use indexes: Indexes can significantly improve the performance of queries by reducing the number of rows that must be scanned. Properly indexing the tables can make the queries run faster.
- Use EXPLAIN: The EXPLAIN command shows the execution plan of a query, which helps to identify any performance issues. It allows you to see how PostgreSQL will execute the query and optimize it accordingly (see the example after this list).
- Use JOINs efficiently: JOINs are essential for retrieving data from multiple tables, but if they are not optimized correctly, they can slow down query performance. Therefore, it's important to use JOINs efficiently by choosing the right type of JOIN, creating indexes on the joining columns, and selecting the appropriate order of JOINs.
- Avoid subqueries: Subqueries can be a powerful tool for data analysis, but they can also be slow and inefficient. In many cases, it is possible to rewrite the query using a JOIN, which can result in faster performance.
- Use LIMIT and OFFSET: LIMIT and OFFSET clauses can be used to restrict the number of rows returned by a query. Using LIMIT and OFFSET can significantly reduce the amount of data that needs to be processed by the database.
- Use efficient data types: PostgreSQL provides several data types, and using the appropriate data types for your data can improve query performance. For example, using the INT data type instead of BIGINT for smaller numbers can reduce storage and processing requirements.
- Analyze and vacuum tables: Analyzing and vacuuming tables can help improve query performance by updating statistics and removing dead rows from the table.
By using these techniques, you can optimize queries in PostgreSQL and improve the performance of your database.
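For instance, EXPLAIN ANALYZE actually runs the statement and reports the real plan, row counts, and timings (the filter and threshold here are illustrative, reusing the mytable example from above):
EXPLAIN (ANALYZE, BUFFERS)
SELECT name, email
FROM mytable
WHERE age > 30
ORDER BY name
LIMIT 10;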
In PostgreSQL, a trigger is a special type of stored procedure that is automatically executed in response to certain events occurring in the database. These events can include things like insertions, updates, and deletions of data in a table.
To create a trigger in PostgreSQL, you can use the CREATE TRIGGER statement, which has the following syntax:
CREATE TRIGGER trigger_name
{BEFORE | AFTER} {INSERT | UPDATE | DELETE}
ON table_name
[FOR EACH ROW]
EXECUTE FUNCTION trigger_function_name();
Here, trigger_name is the name you want to give to your trigger; BEFORE or AFTER specifies when the trigger should be executed (before or after the event); INSERT, UPDATE, or DELETE specifies the event that fires the trigger; table_name is the name of the table on which the trigger will be created; and trigger_function_name is the name of the function that will be executed when the trigger is fired. (On PostgreSQL versions before 11, the final clause is written EXECUTE PROCEDURE instead of EXECUTE FUNCTION.)
You can also add the FOR EACH ROW clause to specify that the trigger should be executed once for each row affected by the event.
Here's an example of how you can create a simple trigger in PostgreSQL:
CREATE TRIGGER update_timestamp
BEFORE INSERT OR UPDATE
ON my_table
FOR EACH ROW
EXECUTE FUNCTION update_timestamp_function();
This trigger will be executed before any INSERT or UPDATE operations on the my_table table, and will call the update_timestamp_function() function.
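The trigger function itself must exist before the trigger is created. A minimal PL/pgSQL sketch, assuming my_table has an updated_at timestamp column:
CREATE OR REPLACE FUNCTION update_timestamp_function()
RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();  -- stamp the row being inserted or updated
    RETURN NEW;               -- return the (possibly modified) row
END;
$$ LANGUAGE plpgsql;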
In PostgreSQL, a transaction is a group of SQL statements that are executed as a single unit of work. The changes made by the statements within a transaction can either be committed to the database or rolled back, effectively undoing any changes made within the transaction.
To start a transaction in PostgreSQL, you can use the BEGIN statement. For example:
BEGIN;
Once the BEGIN statement is executed, all subsequent statements within the transaction will be considered part of the same transaction until it is either committed or rolled back.
To commit a transaction and save the changes made by its statements, you can use the COMMIT statement. For example:
COMMIT;
Alternatively, if you want to discard the changes made within a transaction, you can use the ROLLBACK statement. For example:
ROLLBACK;
It's worth noting that PostgreSQL runs in autocommit mode by default: each individual statement is implicitly wrapped in its own transaction. If you want a set of statements to be executed as a single transaction, you will need to use the BEGIN, COMMIT, and ROLLBACK statements explicitly.
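As a sketch, here is a money transfer between two rows of a hypothetical accounts table, executed as one atomic unit:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;  -- or ROLLBACK; to undo both updates together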
PostgreSQL provides different levels of transaction isolation to ensure that concurrent transactions do not affect each other and maintain the consistency of the database. The different levels of transaction isolation in PostgreSQL are:
- Read Uncommitted: The SQL standard allows this level to return uncommitted ("dirty") data, but in PostgreSQL Read Uncommitted behaves identically to Read Committed, so dirty reads never actually occur.
- Read Committed: This is PostgreSQL's default level. Transactions read only committed data, so they never see uncommitted changes from other transactions. However, it can still result in non-repeatable reads and phantom reads.
- Repeatable Read: This isolation level takes a snapshot of the database at the transaction's first statement, so every read within the transaction sees the same data. This prevents non-repeatable reads, and in PostgreSQL's snapshot-based implementation it prevents phantom reads as well, although serialization anomalies remain possible.
- Serializable: This isolation level provides the highest level of transaction isolation. It guarantees that concurrent transactions produce the same result as if they had run one at a time; PostgreSQL enforces this with serializable snapshot isolation, aborting a transaction whose interleaving could produce an anomaly, thereby preventing non-repeatable reads, phantom reads, and other anomalies.
To set the isolation level, you can use the SET TRANSACTION ISOLATION LEVEL command followed by the desired level. For example, to set the transaction isolation level to Repeatable Read, you can use the following command:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
To start a transaction, you can use the BEGIN command, and to commit a transaction, you can use the COMMIT command. For example:
BEGIN;
-- perform some database operations
COMMIT;
To back up and restore a PostgreSQL database, you can use the pg_dump and pg_restore utilities that come with PostgreSQL (for plain-SQL dumps, the restore side is handled by psql, as shown below).
To back up a PostgreSQL database using pg_dump, follow these steps:
- Open a terminal or command prompt and enter the following command to create a backup of the database:
pg_dump dbname > backup_file.sql
Replace dbname with the name of the database you want to back up, and backup_file.sql with the name you want to give to the backup file.
- If you want to back up all databases, you can use the following command instead:
pg_dumpall > backup_file.sql
This will create a backup of all databases on the server.
To restore a PostgreSQL database, the right tool depends on the backup format:
- A plain-SQL backup like the ones above is restored with psql, not pg_restore. Open a terminal or command prompt and enter the following command:
psql -d dbname -f backup_file.sql
Replace dbname with the name of the database you want to restore into (it must already exist), and backup_file.sql with the name of the backup file you want to use.
- pg_restore is used when the backup was taken in one of pg_dump's archive formats (custom, directory, or tar), for example:
pg_restore -d dbname backup_file.dump
- To restore a pg_dumpall backup of all databases, run the file through psql while connected to the postgres maintenance database:
psql -f backup_file.sql postgres
Note that you may need to provide additional options depending on the specifics of your backup and restore process, such as the user to connect as or the host to connect to. Refer to the PostgreSQL documentation for more information on these options.
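As a sketch, here is a complete round trip using the custom format, which supports compression and selective restore (the names are illustrative):
# Dump one database in custom format
pg_dump -F c -f mydatabase.dump mydatabase
# Recreate the database and restore into it; -C makes pg_restore
# issue CREATE DATABASE itself while initially connected to 'postgres'
pg_restore -C -d postgres mydatabase.dump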
Replication in PostgreSQL is the process of copying data from one PostgreSQL database server to another in a continuous and synchronized manner. This allows for increased data availability, improved scalability, and better disaster recovery.
There are different types of replication methods in PostgreSQL, including:
- Physical Replication: This involves creating a replica of the entire PostgreSQL database cluster, including all databases, tables, and indexes. Physical replication can be synchronous or asynchronous, and it uses the streaming replication feature of PostgreSQL.
- Logical Replication: This involves replicating only specific tables or subsets of data from one PostgreSQL database to another. Logical replication can be used for tasks like data warehousing, real-time analytics, and data distribution.
To set up replication in PostgreSQL, you need to configure both the primary (source) and standby (destination) servers. This involves configuring the postgresql.conf and pg_hba.conf files, creating replication slots, and setting up the replication user and password.
Once replication is set up, you can monitor it using tools like pg_stat_replication, pg_replication_slots, and pg_stat_activity. If replication fails, you can troubleshoot it using the PostgreSQL logs, replication status, and replication slots.
There are several types of replication available in PostgreSQL:
- Physical Replication: In this type of replication, the entire database cluster is replicated to a standby server. The replication is done by copying the data files from the primary server to the standby server using streaming replication or file-based replication.
- Logical Replication: This type of replication allows replication of a subset of the database objects. In this method, changes are tracked at the row level and sent to the replica server in a specific format. It allows more flexibility in selecting the objects to be replicated and can be used in cases where the database is large.
- Streaming Replication: This is a feature of physical replication, where changes on the primary server are streamed to the standby server in near real time. It is asynchronous by default, but can be configured as synchronous so that the standby server is always kept up to date with the primary server.
- Hot Standby: Hot standby is a feature of physical replication where a standby server is available for read-only queries while the primary server is still running. This allows for load balancing and scaling out of read-only queries.
- Asynchronous Replication: In this type of replication, the primary server commits transactions without waiting for them to be replicated to the standby server; changes are streamed over afterwards. It provides higher performance, as the primary server can continue processing requests without waiting for the replication process.
- Synchronous Replication: This is a feature of physical replication where the primary server waits for confirmation from the standby server that the data has been successfully replicated before committing the transaction. It provides data consistency but may affect the performance of the primary server.
Streaming replication is a feature of PostgreSQL that enables continuous, asynchronous replication of a primary database to one or more standby databases. Setting up and configuring streaming replication in PostgreSQL involves the following steps:
- Set up a primary server: The primary server is the server that hosts the original database that needs to be replicated. You can set up a primary server by installing PostgreSQL and creating a new database cluster.
- Set up a standby server: The standby server is the server that will replicate the data from the primary server. You can set up a standby server by installing PostgreSQL and creating a new database cluster.
- Configure the primary server for streaming replication: To configure the primary server for streaming replication, you need to modify the PostgreSQL configuration file (postgresql.conf) to set the wal_level parameter to at least "replica", and set up a replication user with the required privileges.
- Create a base backup of the primary server: A base backup is a copy of the primary server's data directory. You can create a base backup by using the pg_basebackup utility.
- Copy the base backup to the standby server: After creating the base backup, copy it to the standby server.
- Configure the standby server for streaming replication: To configure the standby server for streaming replication, you need to enable the hot_standby parameter in postgresql.conf and point the standby at the primary. On PostgreSQL 11 and earlier this is done with a recovery.conf file; on PostgreSQL 12 and later, recovery.conf has been replaced by a standby.signal file plus primary_conninfo and related settings in postgresql.conf.
- Start the primary and standby servers: Start the primary and standby servers and monitor the replication process.
By following these steps, you can set up and configure streaming replication in PostgreSQL.
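A minimal sketch of the key pieces, assuming PostgreSQL 12 or later and illustrative host names, users, and paths:
# primary: postgresql.conf
wal_level = replica
max_wal_senders = 5
# primary: pg_hba.conf -- allow the replication user to connect from the standby
host replication repl_user 192.168.1.20/32 scram-sha-256
# on the standby: take the base backup; -R writes primary_conninfo
# to postgresql.auto.conf and creates the standby.signal file
pg_basebackup -h primary-host -U repl_user -D /var/lib/postgresql/data -R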
In PostgreSQL, a hot standby is a replica server that is kept in a state of continuous recovery and can be used for failover purposes in case the primary server fails. The hot standby is continuously applying changes from the primary server's write-ahead log (WAL), so that it is kept up to date with the primary server's transactions. This allows it to take over as the new primary server with minimal disruption in case of a primary server failure.
To set up a hot standby in PostgreSQL, you first need to configure streaming replication between the primary server and the standby server. This involves setting up the standby server as a replica of the primary server and configuring it to continuously receive and apply changes from the primary server's WAL.
Once streaming replication is set up, you can start the standby server in hot standby mode by setting the hot_standby parameter to on in its configuration file (this is the default in recent PostgreSQL versions). This allows the standby server to accept read-only queries while it continuously applies changes from the primary server's WAL. If the primary server fails, the standby can be promoted to become the new primary with the pg_ctl promote command.
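For example (the data directory path is illustrative):
pg_ctl promote -D /var/lib/postgresql/data
On PostgreSQL 12 and later, the same thing can be done from a SQL session on the standby with SELECT pg_promote();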
pg_dump is a PostgreSQL utility that enables you to create backups of a PostgreSQL database. By default, the pg_dump utility generates a script of SQL commands that can be used to recreate the database at a later time. The pg_dump command can be run from the command line, and its basic syntax is as follows:
pg_dump [options] database_name > backup_file.sql
Here, database_name is the name of the database that you want to back up, and backup_file.sql is the name of the file that you want to save the backup to.
Some common options that can be used with pg_dump include:
- -U: specifies the username to connect to the database as
- -h: specifies the host name or IP address of the database server
- -p: specifies the port number to connect to the database on
- -F: specifies the format of the output (p for plain SQL, c for custom, d for directory, t for tar)
- -b: includes large objects (blobs) in the dump
- -t: specifies which tables to include in the backup
To restore a plain-format backup created with pg_dump, you can use the psql utility, like this:
psql -d database_name < backup_file.sql
Here, database_name is the name of the database that you want to restore the backup to, and backup_file.sql is the name of the backup file that you want to restore from.
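Putting a few of these options together, a sketch (the user, host, and table names are illustrative):
pg_dump -U myuser -h localhost -p 5432 -F c -t mytable mydatabase > mytable.dump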
Setting up PostgreSQL for high availability requires implementing various techniques and tools to ensure that the database can continue to function even when a component of the system fails. Some of the techniques and tools for achieving high availability in PostgreSQL include:
- Replication: Replication involves creating copies of the database and distributing them to different servers. This ensures that if one server fails, another server can take over, and users can continue accessing the database without interruption. PostgreSQL supports both asynchronous and synchronous replication.
- Load balancing: Load balancing involves distributing database requests across multiple servers to prevent any one server from becoming overwhelmed. PostgreSQL can be configured to work with load balancers such as HAProxy and Pgpool-II, often combined with the PgBouncer connection pooler.
- Automatic failover: Automatic failover allows for automatic switching to a standby server in the event of a failure in the primary server. This is achieved using tools like repmgr, pg_auto_failover, and Patroni.
- Monitoring and alerting: It is important to monitor the status of the PostgreSQL server and set up alerts to notify administrators of any issues that arise. Tools like Nagios, Zabbix, and Icinga can be used for monitoring and alerting.
- Backups: Regular backups are essential for disaster recovery in case of a catastrophic failure. PostgreSQL provides the pg_dump tool for creating backups, and other tools like Barman and pgBackRest can be used for more advanced backup and restore operations.
Overall, setting up PostgreSQL for high availability involves configuring replication, load balancing, automatic failover, monitoring, alerting, and backups, to ensure that the database can continue to function even when a component of the system fails.
PostgreSQL supports several authentication methods, including:
- Trust: allows any user to connect without a password.
- Password: requires users to provide a password to connect (verified using md5 or, preferably, scram-sha-256 hashing).
- LDAP: uses a Lightweight Directory Access Protocol (LDAP) server to authenticate users.
- PAM: uses Pluggable Authentication Modules (PAM) to authenticate users.
- Certificate: requires a valid SSL/TLS certificate to connect.
- GSSAPI: uses the Generic Security Services Application Program Interface (GSSAPI) to authenticate users.
In addition to these built-in authentication methods, PostgreSQL can also use external authentication methods through the use of custom authentication plugins.
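Authentication methods are assigned per connection type, database, user, and client address in the pg_hba.conf file; a sketch with illustrative names and addresses:
# TYPE    DATABASE  USER      ADDRESS         METHOD
local     all       postgres                  peer
host      all       all       127.0.0.1/32    scram-sha-256
host      mydb      appuser   10.0.0.0/24     md5
hostssl   all       all       0.0.0.0/0       cert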
Securing a PostgreSQL database involves implementing measures to protect the database from unauthorized access, data theft, and other security threats. Here are some best practices to secure a PostgreSQL database:
- Use strong passwords: Always use strong passwords for all user accounts, and make sure to change them regularly. Avoid using default passwords, and use a combination of upper and lower case letters, numbers, and special characters.
- Implement role-based access control: Implement role-based access control (RBAC) to ensure that users only have access to the data they need. Grant only the necessary privileges to users to prevent unauthorized access or accidental data loss (see the sketch after this list).
- Encrypt sensitive data: Encrypt sensitive data, such as passwords and credit card numbers, using industry-standard encryption algorithms like AES (Advanced Encryption Standard).
- Use SSL/TLS encryption: Use SSL/TLS encryption to secure network communication between clients and the server. This will prevent man-in-the-middle attacks and eavesdropping.
- Disable unnecessary services: Disable any unnecessary services and protocols, such as Telnet and FTP, to reduce the attack surface of the database server.
- Keep the database software up to date: Regularly update the PostgreSQL database software to the latest version to ensure that known vulnerabilities are patched.
- Monitor the database: Monitor the database activity and audit logs to detect any suspicious activity or unauthorized access attempts.
- Use a firewall: Use a firewall to restrict network traffic to only authorized users and IP addresses.
- Limit access to the server: Limit physical access to the server hosting the database and implement appropriate security measures, such as access controls and CCTV cameras.
- Regularly backup the database: Regularly back up the database to prevent data loss in case of hardware failure or other disasters.
By following these best practices, you can secure your PostgreSQL database and protect it from various security threats.
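As a sketch of the role-based access control point, with illustrative role, database, and schema names:
-- A read-only role for reporting users
CREATE ROLE readonly_user LOGIN PASSWORD 'use-a-strong-password';
GRANT CONNECT ON DATABASE mydatabase TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;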
There are several ways to monitor the performance of a PostgreSQL database. Here are some methods:
- PostgreSQL log files: PostgreSQL writes logs to a file that can be used to diagnose performance issues. The log files contain information about slow queries, errors, and other important events.
- pg_stat_activity view: This view displays information about current connections to the database, including the query being executed, the user running the query, and the amount of time the query has been running.
- pg_stat_database view: This view displays information about each database, including the number of connections, the amount of disk space used, and the number of transactions.
- pg_stat_all_tables and pg_statio_all_tables views: These views display information about each table, including the number of times it has been accessed, the number of rows in the table, and the amount of disk space used.
- EXPLAIN and EXPLAIN ANALYZE commands: These commands can be used to analyze the execution plan of a query and identify performance bottlenecks.
- Performance monitoring tools: There are several third-party tools that can be used to monitor the performance of a PostgreSQL database, including pgAdmin, Nagios, and Zabbix.
By monitoring the performance of a PostgreSQL database, you can identify and diagnose performance issues, optimize queries and indexes, and improve the overall performance of the database.
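For example, a query against pg_stat_activity to spot long-running statements (the 5-minute threshold is arbitrary):
SELECT pid, usename, state, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;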
PostgreSQL is often the preferred database for Django projects due to its advanced features, reliability, and ability to handle large amounts of data efficiently. Additionally, PostgreSQL supports advanced data types, indexing options, and query optimization techniques that make it well-suited for web applications, and Django's ORM exposes several PostgreSQL-specific fields and features. In contrast, MySQL has a simpler feature set and is often preferred for smaller projects. However, both databases can be used effectively with Django, and the choice ultimately depends on the specific requirements and preferences of the project.