Useful SQL Queries

From truxwiki.com

The following SQL queries were found to be useful when developing/debugging Truxton.


Database Server

This section deals with tasks required at the server level.

Number of Connections

SELECT SUM(numbackends) FROM pg_stat_database

Maximum Number of Connections

Run this query as an administrator.

SHOW max_connections

You can change this by:

ALTER SYSTEM SET max_connections = '500';

You must restart the Postgres service in order for the change to take effect.

All Connection Information

This query will show you the number of active connections, maximum number of connections (server configuration) and the number of connections reserved for the super user.

SELECT * FROM
(SELECT COUNT(*) "Current Number of Connections" FROM pg_stat_activity) q1,
(SELECT setting::int "Reserved for Superuser" FROM pg_settings WHERE name=$$superuser_reserved_connections$$) q2,
(SELECT setting::int "Maximum Number of Connections" FROM pg_settings WHERE name=$$max_connections$$) q3;

Current Connections

This query will show you the current connections.

SELECT datname,
       usename,
       application_name,
       client_addr
FROM pg_stat_activity

Current Activity

Ever wonder what the server is doing? This will show you what each connection is doing.

SELECT state_change,
       age(localtimestamp, state_change),
       datname,
       client_addr,
       application_name,
       state,
       query
FROM pg_stat_activity 
ORDER BY state,
         age DESC,
         datname ASC

Terminate Connection

PostgreSQL 12 will sometimes leave zombie connections open for days. Here's how to kill them. Each connection is a different process on the server.

Kill Connection By Process ID

This will kill a connection to the database given the process id of the offending postgres.exe process.

SELECT pg_terminate_backend(pid) 
FROM pg_stat_activity
WHERE pid = '29564'

Kill Connection By Application Name

Truxton processes will identify themselves via the application name, making it easier to kill them.

SELECT pg_terminate_backend(pid) 
FROM pg_stat_activity
WHERE application_name like 'Truxton Carve%'

Find Unused Indexes

  SELECT relname AS table,
         indexrelname AS index,
         pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
         idx_scan AS index_scans
    FROM pg_stat_user_indexes ui
    JOIN pg_index i ON ui.indexrelid = i.indexrelid
   WHERE NOT indisunique 
     AND idx_scan = 0
     AND pg_relation_size(relid) > 5 * 8192
ORDER BY pg_relation_size(i.indexrelid) / nullif(idx_scan, 0) DESC NULLS FIRST,
         pg_relation_size(i.indexrelid) DESC;

Rename a Database

Sometimes, you may want to stash your current instance of Truxton so you can stand up another and do some testing. You can rename the database, do your work, then rename it back when you are done. Here's the SQL to rename a database.

ALTER DATABASE "Truxton" RENAME TO "SavedTruxton";

The tricky part is that you cannot rename a database that has open connections. You can get around this problem by stopping the Truxton services on the network, closing all Analyst desktops, and using the psql.exe command line tool.

"C:\Program Files\PostgreSQL\15\bin\psql.exe" --host=localhost --port=5432 --username=postgres --quiet "--command=ALTER DATABASE ""Truxton"" RENAME TO ""SavedTruxton"";"

Create a Database

Here are the steps to manually create a Truxton database.

Create the database.

"C:\Program Files\PostgreSQL\15\bin\psql.exe" --host=localhost --port=5432 --username=postgres --quiet "--command=CREATE DATABASE ""Truxton"" TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'C' LC_CTYPE = 'C' CONNECTION LIMIT = -1;"

Never create something without documenting it. Add a comment to the newly created database.

"C:\Program Files\PostgreSQL\15\bin\psql.exe" --host=localhost --port=5432 --username=postgres --quiet "--command=COMMENT ON DATABASE ""Truxton"" IS 'This is the Truxton database. It holds all of the data about files, artifacts and investigative work.';"

Now that the database is created, we must add the tables to it to store the Truxton data. All of the SQL scripts required to create a new instance of Truxton are in the C:\Program Files\Truxton\Database folder.

"C:\Program Files\PostgreSQL\15\bin\psql.exe" --dbname=Truxton --host=localhost --port=5432 --username=postgres --quiet "--file=C:\Program Files\Truxton\Database\create-tables.sql"
"C:\Program Files\PostgreSQL\15\bin\psql.exe" --dbname=Truxton --host=localhost --port=5432 --username=postgres --quiet "--file=C:\Program Files\Truxton\Database\create-tables_1.sql"
"C:\Program Files\PostgreSQL\15\bin\psql.exe" --dbname=Truxton --host=localhost --port=5432 --username=postgres --quiet "--file=C:\Program Files\Truxton\Database\create-tables_2.sql"

The last step in the process is to populate the tables with default data.

"C:\Program Files\PostgreSQL\15\bin\psql.exe" --dbname=Truxton --host=localhost --port=5432 --username=postgres --quiet "--file=C:\Program Files\Truxton\Database\insert-data.sql"
"C:\Program Files\PostgreSQL\15\bin\psql.exe" --dbname=Truxton --host=localhost --port=5432 --username=postgres --quiet "--file=C:\Program Files\Truxton\Database\insert-data_1.sql"
"C:\Program Files\PostgreSQL\15\bin\psql.exe" --dbname=Truxton --host=localhost --port=5432 --username=postgres --quiet "--file=C:\Program Files\Truxton\Database\insert-data_2.sql"
"C:\Program Files\PostgreSQL\15\bin\psql.exe" --dbname=Truxton --host=localhost --port=5432 --username=postgres --quiet "--file=C:\Program Files\Truxton\Database\insert-data_3.sql"
"C:\Program Files\PostgreSQL\15\bin\psql.exe" --dbname=Truxton --host=localhost --port=5432 --username=postgres --quiet "--file=C:\Program Files\Truxton\Database\insert-data_4.sql"

The next time you start the Truxton Service, the rest of the tables will be populated with dynamic data.

Determine Table Packing

Postgres aligns column values within each row, inserting pad bytes between columns. Pad bytes are wasted space. To minimize the wasted space, use the following query to help guide the order in which you create your columns.

SELECT a.attname,
       t.typname,
       t.typalign,
       t.typlen
  FROM pg_class c
  JOIN pg_attribute a ON (a.attrelid = c.oid)
  JOIN pg_type t ON (t.oid = a.atttypid)
 WHERE c.relname = 'File'
   AND a.attnum >= 0
ORDER BY t.typlen DESC, t.typname ASC, a.attname ASC
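As a sketch of how alignment padding works (a hypothetical helper, not part of Truxton): each column starts on its typalign boundary, so ordering columns from widest to narrowest tends to minimize pad bytes.

```python
# Illustration of Postgres column alignment padding (assumption:
# typalign codes map to boundaries c = 1 byte, s = 2, i = 4, d = 8).
ALIGN = {"c": 1, "s": 2, "i": 4, "d": 8}

def row_size(columns):
    """Compute the fixed-width portion of a row, adding pad bytes
    so each column starts on its alignment boundary."""
    offset = 0
    for typlen, typalign in columns:
        boundary = ALIGN[typalign]
        offset += -offset % boundary  # pad up to the boundary
        offset += typlen
    return offset

# A 1-byte column followed by an 8-byte column wastes 7 pad bytes...
print(row_size([(1, "c"), (8, "d")]))  # 16
# ...while the reverse order wastes none.
print(row_size([(8, "d"), (1, "c")]))  # 9
```

This is why sorting the query output by typlen descending suggests a space-efficient column order.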

Files

Queries to get file lists.

Human Readable List

This will list files in a human readable format. The WHERE clause is commented out so you can add your own filter criteria.

SELECT "File"."ID",
       "File"."MediaID",
       "File"."ParentFileID",
       "Origin"."Name" AS "Origin",
       "FileType"."ShortName" AS "FileType",
       "FN2"."Name",
       "File"."OSLength",
       "File"."HashID",
       "File"."RawEntropy" * 0.00024414502395702993865779595324564 AS "Entropy",
       "Filename"."Name" AS "Path",
       "File"."FullPathID",
       "File"."FileTypeID",
       to_hex( "File"."Signature" ) AS "Signature",
       to_hex( "File"."PhysicalDiskOffset" ) AS "DiskOffset",
       "FileType"."LongName" AS "FileTypeDescription",
       "File"."Created",
       "File"."LastWrite",
       "ContentStatus"."Name" AS "CStatus",
       "File"."NumberOfChildren",
       "File"."FilenameID"
FROM "File"
LEFT OUTER JOIN "FileType" ON ( "File"."FileTypeID" = "FileType"."ID" )
LEFT OUTER JOIN "Filename" AS "FN2" ON ( "File"."FilenameID" = "FN2"."ID" )
LEFT OUTER JOIN "Filename" ON ( "File"."FullPathID" = "Filename"."ID" )
LEFT OUTER JOIN "Origin" ON ( "File"."OriginID" = "Origin"."ID" )
LEFT OUTER JOIN "ContentStatus" ON ( "File"."ContentStatusID" = "ContentStatus"."ID" )
--WHERE "File"."Signature" = -623191334
ORDER BY "File"."ID"
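The magic constant in the Entropy column appears to scale a raw integer (roughly 0 to 32767) into bits per byte (0 to 8). A quick sanity check, assuming that encoding:

```python
# Assumption: RawEntropy is a scaled integer where ~32767 maps to
# 8 bits per byte, matching the constant in the query above.
SCALE = 0.00024414502395702993865779595324564

def entropy_bits(raw_entropy):
    """Convert a RawEntropy integer to bits per byte (0..8)."""
    return raw_entropy * SCALE

print(round(entropy_bits(32767), 3))  # 8.0
```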

Files with Depot Locations

SELECT "File"."ID",
       "File"."ParentFileID",
       "File"."FileTypeID",
       "FileType"."ShortName",
       "Filename"."Name",
       "Content"."Offset",
       "Content"."Length",
       "Depot"."Filename"
FROM "File", 
     "Depot",
     "Filename",
     "Content",
     "FileType"
WHERE "File"."HashID" = "Content"."Hash" 
  AND "File"."FilenameID" = "Filename"."ID"
  AND "Content"."DepotID" = "Depot"."ID"
  AND "File"."FileTypeID" = "FileType"."ID"

Carved Files

This query is useful when examining Truxton's file carving quality. This particular query is looking for carved JPEG and JPEG with EXIF files in a specific piece of media.

SELECT DISTINCT("File"."ID"),
       "FileType"."ShortName",
       "Filename"."Name",
       "Content"."Length",
       "Content"."Offset",
       "File"."PhysicalDiskOffset",
       "File"."HashID" AS "MD5",
       "MD5Hash"."SHA1",
       "Depot"."Filename" AS "Depot Path"
FROM "File", 
     "Depot",
     "Filename",
     "Content",
     "FileType",
     "MD5Hash"
WHERE "File"."HashID" = "Content"."Hash" 
  AND "File"."FilenameID" = "Filename"."ID"
  AND "Content"."DepotID" = "Depot"."ID"
  AND "File"."FileTypeID" = "FileType"."ID"
  AND "MD5Hash"."ID" = "File"."HashID"
  AND "File"."OriginID" = 3
  AND "File"."FileTypeID" IN (203, 204)
  AND "File"."MediaID" = '27cfac98-af34-1cc9-6812-a18573e44e0e'::uuid
  ORDER BY "Depot Path" ASC, "Offset" ASC

Files Truxton Can Exploit

SELECT "ShortName",
       "LongName"
FROM "FileType"
WHERE "ID" IN
(
  SELECT DISTINCT("FileTypeID")
  FROM "ETLRoute"
  WHERE "ETLQueueName" IN ('expand', 'email', 'remoteexpand', 'archives', 'pfe                             ')
)
ORDER BY "ShortName"

This one produces a single line of output that is easier for a human to read.

SELECT CONCAT( "ShortName", ' - ', "LongName" )
FROM "FileType"
WHERE "ID" IN
(
  SELECT DISTINCT("FileTypeID")
  FROM "ETLRoute"
  WHERE "ETLQueueName" IN ('expand', 'email', 'remoteexpand', 'archives', 'pfe                             ')
)
ORDER BY "ShortName"

NOTE the pfe item is a bug. There should NOT be spaces in that name.

Number of Files of Type in Media

This will produce a list of the count of files of a particular type in media.

  SELECT "MediaID",
          COUNT("FileTypeID") AS "FileCount"
    FROM "File"
   WHERE "FileTypeID" = 201
GROUP BY "MediaID",
         "FileTypeID"
ORDER BY COUNT("FileTypeID") DESC

Find Investigation Name given File ID

SELECT "Investigation"."Name"
FROM "Investigation"
LEFT OUTER JOIN "InvestigationMedia" ON ("InvestigationMedia"."InvestigationID" = "Investigation"."ID")
LEFT OUTER JOIN "Media" ON ("Media"."ID" = "InvestigationMedia"."MediaID")
LEFT OUTER JOIN "File" ON ("File"."MediaID" = "Media"."ID")
WHERE "File"."ID" = '645ff09a-6425-1bf8-8542-9bdc00035f8c'::uuid

Find Investigations given File Signature

The following query will find all investigations that contain a file that starts with the byte sequence of 00 00 03 00

SELECT DISTINCT( "Investigation"."Name" )
FROM "Investigation"
LEFT OUTER JOIN "InvestigationMedia" ON ("InvestigationMedia"."InvestigationID" = "Investigation"."ID")
LEFT OUTER JOIN "Media" ON ("Media"."ID" = "InvestigationMedia"."MediaID")
LEFT OUTER JOIN "File" ON ("File"."MediaID" = "Media"."ID")
WHERE "File"."Signature" = cast(x'00000300' as int)
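The Signature value is the file's first four bytes interpreted as a signed 32-bit integer; a sketch of computing the query parameter from raw bytes (big-endian storage is an assumption here):

```python
def signature_int(first_four_bytes: bytes) -> int:
    """Interpret a file's first four bytes as the signed 32-bit
    Signature value used in the File table (big-endian assumed)."""
    return int.from_bytes(first_four_bytes, byteorder="big", signed=True)

# The byte sequence 00 00 03 00 from the query above:
print(signature_int(bytes.fromhex("00000300")))  # 768
```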

Find Investigations given File Name Pattern

The following query will find all investigations that contain a CSV file.

SELECT DISTINCT "Investigation"."Name"
FROM "Investigation"
LEFT OUTER JOIN "InvestigationMedia" ON ("InvestigationMedia"."InvestigationID" = "Investigation"."ID")
LEFT OUTER JOIN "Media" ON ("Media"."ID" = "InvestigationMedia"."MediaID")
LEFT OUTER JOIN "File" ON ("File"."MediaID" = "Media"."ID")
LEFT OUTER JOIN "Filename" ON ("Filename"."ID" = "File"."FilenameID" )
WHERE "Filename"."Name" ILIKE  '%.csv'

Find File Names for File Type

This query will show you all of the names of files of type Android Battery History

SELECT DISTINCT "Filename"."Name"
FROM "File"
LEFT OUTER JOIN "Filename" ON ("Filename"."ID" = "File"."FilenameID")
WHERE "FileTypeID" = 1021

Make a Hashset From Loaded Data

When you get a lot of data loaded, you will notice that there are lots of files that are repeated. These files will probably never make it into the NSRL (https://www.nist.gov/itl/ssd/software-quality-group/national-software-reference-library-nsrl). Using this query, you can create your own hashset to eliminate these files in future loads.

This queries the database for hashes of files that haven't already been eliminated and that occur more than 50 times.

SELECT UPPER(SUBSTRING( "HashID"::text FROM 1 FOR 8 ) ||
SUBSTRING( "HashID"::text FROM 10 FOR 4 ) ||
SUBSTRING( "HashID"::text FROM 15 FOR 4 ) ||
SUBSTRING( "HashID"::text FROM 20 FOR 4 ) ||
SUBSTRING( "HashID"::text FROM 25 FOR 12 )) AS "Hash"
FROM "File"
WHERE "OriginID" = 1 AND
"ContentStatusID" = 1 AND
"HashID" NOT IN ( '00000000-0000-0000-0000-000000000000'::uuid, 'd41d8cd9-8f00-b204-e980-0998ecf8427e'::uuid )
GROUP BY "HashID"
HAVING COUNT("HashID") > 50
ORDER BY "Hash" ASC
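The SUBSTRING gymnastics above just strip the dashes from the UUID-formatted hash; the same transformation in Python (a convenience sketch):

```python
def uuid_to_hash(hash_id: str) -> str:
    """Convert a UUID-formatted MD5 (as stored in HashID) into the
    plain 32-character uppercase hex digest used in hashsets."""
    return hash_id.replace("-", "").upper()

# The empty-file MD5 excluded in the query above:
print(uuid_to_hash("d41d8cd9-8f00-b204-e980-0998ecf8427e"))
# D41D8CD98F00B204E9800998ECF8427E
```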

Logs

All Messages

SELECT "Log"."MediaID",
       "Log"."When",
       "T"."String" AS "Type",
       "S"."String" AS "Source",
       "Log"."ProcessID",
       "Log"."ThreadID",
       "C"."String" AS "Client",
       "U"."String" AS "User",
       "M"."String" AS "Message"
FROM "Log",
     "LogString" AS "S",
     "LogString" AS "C",
     "LogString" AS "U",
     "LogString" AS "M",
     "LogString" AS "T"
WHERE "Log"."LogStringSourceID" = "S"."ID" 
  AND "Log"."LogStringClientID" = "C"."ID" 
  AND "Log"."LogStringUserID" = "U"."ID" 
  AND "Log"."LogStringMessageID" = "M"."ID" 
  AND "Log"."LogStringTypeID" = "T"."ID"
-- AND "Log"."LogStringTypeID" <> 2 -- Don't show Info messages
-- AND "Log"."MediaID" = '11111111-1111-1111-1111-000000000001'::uuid
ORDER BY "Log"."MediaID",
         "Log"."When"

Only Error and Warnings

SELECT "Log"."MediaID",
       "Log"."When",
       "Log"."LogStringTypeID",
       "T"."String" AS "Type",
       "S"."String" AS "Source",
       "Log"."ProcessID",
       "Log"."ThreadID",
       "C"."String" AS "Client",
       "U"."String" AS "User",
       "M"."String" AS "Message"
FROM "Log",
     "LogString" AS "S",
     "LogString" AS "C",
     "LogString" AS "U",
     "LogString" AS "M",
     "LogString" AS "T"
WHERE "Log"."LogStringSourceID" = "S"."ID" 
  AND "Log"."LogStringClientID" = "C"."ID" 
  AND "Log"."LogStringUserID" = "U"."ID" 
  AND "Log"."LogStringMessageID" = "M"."ID" 
  AND "Log"."LogStringTypeID" = "T"."ID"
  AND "Log"."LogStringTypeID" IN ( 0, 1 ) -- Don't show Info messages
-- AND "Log"."MediaID" = '11111111-1111-1111-1111-000000000001'::uuid
ORDER BY "Log"."MediaID",
         "Log"."When"

Load Log

The following query is used to generate the Load Log. You can run this query as media is being loaded to peek at the log.

SELECT "Log"."When",
       "T"."String" AS "Type",
       "S"."String" AS "Source",
       "Log"."ProcessID",
       "Log"."ThreadID",
       "C"."String" AS "Client",
       "U"."String" AS "User",
       "M"."String" AS "Message",
       "Log"."LogStringMessageID"
FROM "Log",
     "LogString" AS "S",
     "LogString" AS "C",
     "LogString" AS "U",
     "LogString" AS "M",
     "LogString" AS "T"
WHERE "Log"."MediaID" = '11111111-1111-1111-1111-000000000001'::uuid
  AND "Log"."LogStringSourceID" = "S"."ID"
  AND "Log"."LogStringClientID" = "C"."ID"
  AND "Log"."LogStringUserID" = "U"."ID"
  AND "Log"."LogStringMessageID" = "M"."ID"
  AND "Log"."LogStringTypeID" = "T"."ID"
ORDER BY "Log"."When",
         "Source";

Depots

These queries will tell you if you're missing depot file entries.

Missing Content Depots

SELECT DISTINCT( "DepotID" )
FROM "Content"
WHERE "DepotID" <> '00000000-0000-0000-0000-000000000000'::uuid
  AND "DepotID" NOT IN ( SELECT "ID" FROM "Depot" )
ORDER BY "DepotID"

Missing Image Depots

SELECT DISTINCT( "DepotID" )
FROM "Image"
WHERE "DepotID" <> '00000000-0000-0000-0000-000000000000'::uuid
  AND "DepotID" NOT IN
  (
    SELECT "ID" FROM "Depot"
  )
ORDER BY "DepotID"

Depot IDs in Use

SELECT DISTINCT "DepotID" AS "ID" FROM
(
  SELECT "DepotID" FROM "Content"
  UNION
  SELECT "DepotID" FROM "Free"
  UNION
  SELECT "DepotID" FROM "Image"
  UNION
  SELECT "DepotID" FROM "Slack"
  UNION
  SELECT "ThumbnailDepotID" AS "DepotID" FROM "ReviewItem"
) AS "SubSelect"

Depot IDs Not in Use

SELECT "ID"
FROM "Depot"
WHERE "ID" NOT IN
(
 SELECT DISTINCT "DepotID" AS "ID" FROM
 (
  SELECT "DepotID" FROM "Content"
  UNION
  SELECT "DepotID" FROM "Free"
  UNION
  SELECT "DepotID" FROM "Image"
  UNION
  SELECT "DepotID" FROM "Slack"
  UNION
  SELECT "ThumbnailDepotID" AS "DepotID" FROM "ReviewItem"
 ) AS "SubSelect"
)

URI in Filename Column

To detect when an ETL has put bad data into the [Depot] table.

SELECT "ID",
       "Filename",
       "URI",
       "DepotTypeID",
       "MaximumSize",
       "Size",
       "IsLongTerm",
       "CryptoKey",
       "DepotStatusID"
FROM "Depot"
WHERE "Filename" LIKE '\\%'

Change Depot Folder

If you move your depot folder, the path will have to be updated in the database. This statement will change the depot drive letter from D to E.

UPDATE "Depot"
SET "Filename" = REPLACE( "Filename", 'D:\', 'E:\' )
WHERE "Filename" LIKE 'D:\\%' -- LIKE needs the backslash escaped; REPLACE takes it literally

Change Depot URI

When you move depots to a new share, the database needs to be fixed up. This sample is what you would run if you need to change the machine name SOFS-01 to a fully qualified DNS name.

UPDATE "Depot"
SET "URI" = REPLACE( "URI", '\\SOFS-01\', '\\sofs-01.truxton.lab\Depot\' )
WHERE "URI" LIKE '\\\\SOFS-01\\%'

If you move depots to Amazon S3 Storage:

UPDATE "Depot"
SET "URI" = REPLACE( "URI", '\\Machine\Share\Depot\', 'https://depot-bucket-a730.s3.amazonaws.com/' )

Change Depot Path

If you move depots to a new folder, you must update the [Filename] column of the [Depot] table. This sample is what you would do if you moved the depot folder from C:\Truxton Data to C:\Storage

UPDATE "Depot"
SET "Filename" = REPLACE( "Filename", 'C:\Truxton Data\Depot\', 'C:\Storage\Depot\' )

Python Script to Change Depot Path

This can also be performed in a Python script. It might be handy to save this script to your computer as ChangeDepotPaths.py

import sys
sys.path.append('C:/Program Files/Truxton/SDK')
import truxton
# To get psycopg use pip:
# pip install "psycopg[binary,pool]"
import psycopg

def main() -> None:
  # Usage: ChangeDepotPaths.py <old path> <new path>
  t = truxton.create()

  with psycopg.connect(t.connectionstring) as database_connection:
    with database_connection.cursor() as database_cursor:
      # Paths are passed as parameters so they need no SQL escaping
      database_cursor.execute('UPDATE "Depot" SET "Filename" = REPLACE( "Filename", %s, %s )', (sys.argv[1], sys.argv[2]))
    database_connection.commit()

if __name__ == "__main__":
    sys.exit(main())

File Type Counts

SELECT "FileTypeID",
       "FileType"."ShortName",
       COUNT( "FileTypeID" ) AS "NumberOfThatType"
FROM "File"
INNER JOIN "FileType" ON ( "FileType"."ID" = "File"."FileTypeID" )
GROUP BY "FileTypeID",
         "FileType"."ShortName"
ORDER BY "NumberOfThatType" DESC

Email/SMS/Communications

Messages and Subjects

SELECT "Message"."ID",
       "MessageType"."ShortName" AS "Type",
       "Sent", "Received", 
       "MessageSubject"."Text" AS "Subject",
       "NumberOfBodies",
       "NumberOfAttachments"
FROM "Message"
LEFT OUTER JOIN "MessageSubject" ON ( "Message"."MessageSubjectID" = "MessageSubject"."ID" )
LEFT OUTER JOIN "MessageType" ON ( "Message"."MessageTypeID" = "MessageType"."ID" )
ORDER BY "Message"."Sent"

Find Messages that did not Parse

SELECT "File"."ID",
       "FN2"."Name",
       "File"."OSLength",
       "File"."HashID",
       "ContentStatus"."Name" AS "CStatus",
       "File"."NumberOfChildren"
FROM "File"
LEFT OUTER JOIN "Filename" AS "FN2" ON ( "File"."FilenameID" = "FN2"."ID" )
LEFT OUTER JOIN "Origin" ON ( "File"."OriginID" = "Origin"."ID" )
LEFT OUTER JOIN "ContentStatus" ON ( "File"."ContentStatusID" = "ContentStatus"."ID" )
WHERE "File"."FileTypeID" = 199
  AND "File"."ID" NOT IN
      (
        SELECT "FileID" FROM "Message"
      )

Human Readable Message Addresses

SELECT "MessageAddress"."ID", 
       "MessageAddress"."AccountID", "Account"."Name",
       "MessageAddress"."DomainID", "Domain"."Name",
       to_hex("MessageAddress"."CombinedID") AS "CombinedID (Hex)", 
       "MessageAddress"."DescriptionID", "MessageAddressDescription"."Description"
FROM "MessageAddress"
JOIN "Account" ON ( "Account"."ID" = "MessageAddress"."AccountID" )
JOIN "Domain" ON ( "Domain"."ID" = "MessageAddress"."DomainID" )
JOIN "MessageAddressDescription" ON ( "MessageAddressDescription"."ID" = "MessageAddress"."DescriptionID" )

SSID

SSID and Associated Passwords

This will produce a list of WiFi connection points with their password in a given piece of media.

SELECT DISTINCT "A_ID" AS "PasswordEntityID",
       "B_ID" AS "SSIDEntityID",
       "SSID"."Value" AS "SSID",
       "Password"."Value" AS "Password"
FROM "Relation"
INNER JOIN "Entity" AS "PasswordEntity" ON ( "Relation"."A_ID" = "PasswordEntity"."ID" )
INNER JOIN "Entity" AS "SSIDEntity" ON ( "Relation"."B_ID" = "SSIDEntity"."ID" )
INNER JOIN "EntityString" AS "Password" ON ("PasswordEntity"."EntityStringID" = "Password"."ID")
INNER JOIN "EntityString" AS "SSID" ON ("SSIDEntity"."EntityStringID" = "SSID"."ID")
WHERE "B_ID" IN 
(
  SELECT "ID"
    FROM "Entity"
   WHERE "MediaID" = '0e8beeee-1c42-91c3-e0b1-93edf6ed2d73'::uuid
     AND "EntityTypeID" = 25
)
ORDER BY "SSID"."Value"

Entity

These queries are for entities.

Human Readable

SELECT "EntityString"."Value", 
       "EntityType"."LongName",
       "Entity"."ID",
       "Entity"."FileID", 
       "Entity"."EntityTypeID", 
       "Entity"."EntityStringID",
       "Entity"."Offset",
       "Entity"."ObjectID", 
       "Entity"."ObjectTypeID",
       "Entity"."Length",
       "Entity"."DataTypeID",
       "ObjectType"."Name"
FROM "Entity"
INNER JOIN "EntityType" ON ( "Entity"."EntityTypeID" = "EntityType"."ID" )
INNER JOIN "EntityString" ON ( "Entity"."EntityStringID" = "EntityString"."ID" )
INNER JOIN "ObjectType" ON ( "Entity"."ObjectTypeID" = "ObjectType"."ID" )
WHERE "Entity"."MediaID" = '139db997-1f71-58d2-46ae-e052bb67a946'::uuid
ORDER BY "EntityString"."Value"

List Accounts with Passwords

This will produce a list of accounts and their associated passwords for a given media.

SELECT "B"."Value" AS "Account",
       "A"."Value" AS "Password"
FROM "Relation"
LEFT OUTER JOIN "Entity" AS "AEntity" ON "Relation"."A_ID" = "AEntity"."ID"
LEFT OUTER JOIN "EntityString" AS "A" ON "AEntity"."EntityStringID" = "A"."ID"
LEFT OUTER JOIN "Entity" AS "BEntity" ON "Relation"."B_ID" = "BEntity"."ID"
LEFT OUTER JOIN "EntityString" AS "B" ON "BEntity"."EntityStringID" = "B"."ID"
WHERE "Relation"."RelationTypeID" = 11
AND "Relation"."B_ID" IN
(
   SELECT "ID" FROM "Entity" WHERE "EntityTypeID" = 1
)
AND "Relation"."SourceID" IN
(
   SELECT "ID" FROM "File" WHERE "MediaID" = '1b35dc5e-aa4b-a36f-6318-2465775e6df8'::uuid
)

List NTHashes and Associated Passwords

This will produce a list of passwords and their NTHash in a given media.

A relation type of 11 is a password relation. An entity type of 38 is an NT hash entity.

SELECT "B"."Value" AS "NTHash",
       "A"."Value" AS "Password"
FROM "Relation"
LEFT OUTER JOIN "Entity" AS "AEntity" ON "Relation"."A_ID" = "AEntity"."ID"
LEFT OUTER JOIN "EntityString" AS "A" ON "AEntity"."EntityStringID" = "A"."ID"
LEFT OUTER JOIN "Entity" AS "BEntity" ON "Relation"."B_ID" = "BEntity"."ID"
LEFT OUTER JOIN "EntityString" AS "B" ON "BEntity"."EntityStringID" = "B"."ID"
WHERE "Relation"."RelationTypeID" = 11
AND "B_ID" IN
(
   SELECT "ID" FROM "Entity" WHERE "EntityTypeID" = 38
)
AND "SourceID" IN
(
   SELECT "ID" FROM "File" WHERE "MediaID" = '1b35dc5e-aa4b-a36f-6318-2465775e6df8'::uuid
)

This will produce a unique list of hashes and passwords for all of Truxton.

A relation type of 11 is a password relation. An entity type of 38 is an NT hash entity.

SELECT DISTINCT "B"."Value" AS "NTHash",
                "A"."Value" AS "Password"
FROM "Relation"
LEFT OUTER JOIN "Entity" AS "AEntity" ON "Relation"."A_ID" = "AEntity"."ID"
LEFT OUTER JOIN "EntityString" AS "A" ON "AEntity"."EntityStringID" = "A"."ID"
LEFT OUTER JOIN "Entity" AS "BEntity" ON "Relation"."B_ID" = "BEntity"."ID"
LEFT OUTER JOIN "EntityString" AS "B" ON "BEntity"."EntityStringID" = "B"."ID"
WHERE "Relation"."RelationTypeID" = 11
AND "B_ID" IN
(
   SELECT "ID" FROM "Entity" WHERE "EntityTypeID" = 38
)
ORDER BY "Password"

Finding a Password by NTHash

This will find a password for a given NTHash.

SELECT DISTINCT "B"."Value" AS "NTHash",
                "A"."Value" AS "Password"
FROM "Relation"
LEFT OUTER JOIN "Entity" AS "AEntity" ON "Relation"."A_ID" = "AEntity"."ID"
LEFT OUTER JOIN "EntityString" AS "A" ON "AEntity"."EntityStringID" = "A"."ID"
LEFT OUTER JOIN "Entity" AS "BEntity" ON "Relation"."B_ID" = "BEntity"."ID"
LEFT OUTER JOIN "EntityString" AS "B" ON "BEntity"."EntityStringID" = "B"."ID"
WHERE "Relation"."RelationTypeID" = 11
AND "B_ID" IN
(
   SELECT "ID" FROM "Entity" WHERE "EntityTypeID" = 38
)
AND "B"."Value" = 'becedb42ec3c5c7f965255338be4453c'

List of NTHashes That Need Cracking

This will produce a unique list of NT Logon Password Hashes that have not been cracked.

SELECT DISTINCT "EntityString"."Value" AS "NTHash"
FROM "Entity"
INNER JOIN "EntityString" ON ( "Entity"."EntityStringID" = "EntityString"."ID" )
WHERE "Entity"."EntityTypeID" = 38
AND "Entity"."ID" NOT IN
(
  SELECT "B_ID"
  FROM "Relation"
  LEFT OUTER JOIN "Entity" ON ("Entity"."ID" = "Relation"."B_ID") 
  WHERE "Relation"."RelationTypeID" = 11
  AND "Entity"."EntityTypeID" = 38
)


List of All Passwords Found

If you wanted to create a password dictionary...

SELECT DISTINCT "EntityString"."Value" AS "Password"
FROM "Entity"
INNER JOIN "EntityString" ON ( "Entity"."EntityStringID" = "EntityString"."ID" )
WHERE "Entity"."EntityTypeID" = 26

Global List of Wifi Access Points with MAC Addresses

This will produce a list of WiFi Access points and their associated MAC addresses.

SELECT DISTINCT "EntityA"."MediaID",
                "EntityStringA"."Value" AS "MACAddress",
                "EntityStringB"."Value" AS "SSID"
FROM "Relation"
INNER JOIN "Entity" AS "EntityA" ON ( "Relation"."A_ID" = "EntityA"."ID" )
INNER JOIN "Entity" AS "EntityB" ON ( "Relation"."B_ID" = "EntityB"."ID" )
INNER JOIN "EntityString" AS "EntityStringA" ON ("EntityA"."EntityStringID" = "EntityStringA"."ID")
INNER JOIN "EntityString" AS "EntityStringB" ON ("EntityB"."EntityStringID" = "EntityStringB"."ID")
WHERE 
(
  "Relation"."A_ID" IN 
  (
    SELECT "Entity"."ID"
    FROM "Entity" 
    WHERE "Entity"."EntityTypeID" = 4
  )
  AND 
  "Relation"."B_ID" IN 
  (
    SELECT "Entity"."ID"
    FROM "Entity" 
    WHERE "Entity"."EntityTypeID" = 25
  )
)

Loads

How Long did the Loads Take

This query is useful only when you are loading a bunch of media for an investigation at the same time.

SELECT MIN("Event"."Start"), MAX("Event"."End")
FROM "Event"
LEFT OUTER JOIN "InvestigationMedia" ON ("InvestigationMedia"."MediaID" = "Event"."MediaID")
WHERE "Event"."EventTypeID" = 1002
AND "InvestigationMedia"."InvestigationID" = '20171021-0000-1111-2222-333333333333'::uuid

Load Times and File Count

This will give you a list of media loads from shortest to longest load time with the number of files in that media.

  SELECT "Event"."Start",
         "Event"."End",
         "Event"."Title",
         "Event"."Description",
        ((DATE_PART('day',    "Event"."End" - "Event"."Start") * 24 +
          DATE_PART('hour',   "Event"."End" - "Event"."Start")) * 60 +
          DATE_PART('minute', "Event"."End" - "Event"."Start")) AS "Duration",
          COUNT("File"."ID") AS "FileCount"
    FROM "Event",
         "File"
   WHERE "Event"."EventTypeID" = 1002
     AND "File"."MediaID" = "Event"."MediaID"
GROUP BY "Event"."Start",
         "Event"."End",
         "Event"."Title",
         "Event"."Description"
ORDER BY "Duration" ASC
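The nested DATE_PART arithmetic just converts the load interval to whole minutes; the equivalent computation as a sketch:

```python
from datetime import datetime

def duration_minutes(start: datetime, end: datetime) -> int:
    """Whole minutes between two timestamps, mirroring the nested
    DATE_PART arithmetic in the query above."""
    return int((end - start).total_seconds() // 60)

# Hypothetical load window: 26 hours 30 minutes.
start = datetime(2024, 1, 1, 8, 0)
end = datetime(2024, 1, 2, 10, 30)
print(duration_minutes(start, end))  # 1590
```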

Load Speed

This will calculate the speed at which Truxton loaded data. It assumes that the loads were done in a single batch. The time window is from the beginning of the load of the first media until the completion of the load of the last media to finish. This is not a good query for everyday use, rather, it is used during test and evaluation to determine optimal configuration of your loader systems.

SELECT COUNT("Media"."ID") AS "Media",
(SUM("Media"."Size") / 1073741824) AS "Gigabytes",
(SUM("Media"."Size") / 1099511627776) AS "Terabytes",
EXTRACT( EPOCH FROM (MAX("Event"."End") - MIN("Event"."Start"))) / 3600 AS "Hours",
(((SUM("Media"."Size") / EXTRACT( EPOCH FROM (MAX("Event"."End") - MIN("Event"."Start")))) * 3600 ) / 1073741824) AS "Gigabytes per hour"
  FROM "Media"
  LEFT OUTER JOIN "Event" ON ( "Media"."ID" = "Event"."MediaID" )
 WHERE "Media"."MediaStatusID" = 230
   AND "Event"."EventTypeID" = 1002
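The unit arithmetic in that query, worked through in Python (the byte count and elapsed time below are hypothetical):

```python
def gigabytes_per_hour(total_bytes: int, elapsed_seconds: float) -> float:
    """Throughput in GiB/hour, matching the query's arithmetic:
    bytes per second, scaled to an hour, divided by 2**30."""
    return (total_bytes / elapsed_seconds) * 3600 / 1073741824

# Hypothetical batch: 2 TiB loaded in 2 hours.
print(gigabytes_per_hour(2 * 1099511627776, 7200))  # 1024.0
```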

Reset

If you want to quickly wipe out the data on a Truxton node, you must clear out the [Truxton] database and the [TruxtonMessageBus] database.

Clear Truxton Database

The following script will clear the tables in Truxton and re-initialize tables to their defaults.

TRUNCATE TABLE "Account";
TRUNCATE TABLE "Alert";
TRUNCATE TABLE "Content";
TRUNCATE TABLE "Depot";
TRUNCATE TABLE "DepotLocalName";
TRUNCATE TABLE "EXIF";
TRUNCATE TABLE "Entity";
TRUNCATE TABLE "EntityString";
TRUNCATE TABLE "Event";
TRUNCATE TABLE "ExpandedFile";
TRUNCATE TABLE "File";
TRUNCATE TABLE "Filename";
TRUNCATE TABLE "Free";
TRUNCATE TABLE "Hash";
TRUNCATE TABLE "HashList";
TRUNCATE TABLE "Image";
TRUNCATE TABLE "Investigation";
TRUNCATE TABLE "InvestigationEvent";
TRUNCATE TABLE "InvestigationMedia";
TRUNCATE TABLE "InvestigationSuspect";
TRUNCATE TABLE "Investigator";
TRUNCATE TABLE "InvestigatorAction";
TRUNCATE TABLE "InvestigatorNote";
TRUNCATE TABLE "InvestigatorSetting";
TRUNCATE TABLE "Location";
TRUNCATE TABLE "Log";
TRUNCATE TABLE "LogString";
TRUNCATE TABLE "MD5Hash";
TRUNCATE TABLE "Media";
TRUNCATE TABLE "Message";
TRUNCATE TABLE "MessageAddress";
TRUNCATE TABLE "MessageAddressDescription";
TRUNCATE TABLE "MessageAddress_Message";
TRUNCATE TABLE "MessagePiece";
TRUNCATE TABLE "MessageSubject";
TRUNCATE TABLE "NormalizedPhoneNumber";
TRUNCATE TABLE "QueryResult";
TRUNCATE TABLE "Relation";
TRUNCATE TABLE "Review";
TRUNCATE TABLE "ReviewItem";
TRUNCATE TABLE "ReviewSource";
TRUNCATE TABLE "ReviewTag";
TRUNCATE TABLE "Slack";
TRUNCATE TABLE "Statistics";
TRUNCATE TABLE "Suspect";
TRUNCATE TABLE "Tagged";
TRUNCATE TABLE "URL";
TRUNCATE TABLE "USBDevice";
TRUNCATE TABLE "WebsiteVisit";

INSERT INTO "Account"
("ID", "Name")
VALUES
(-7327889212648916261, ''),
(0, '');

INSERT INTO "Filename"
("ID", "Name")
VALUES
(-7327889212648916261, '{{Empty}}'),
(-866105518799526845, 'Carved'),
(0, '{{Root}}');

INSERT INTO "LogString"
("ID", "String")
VALUES
(0, 'Error'),
(1, 'Warning' ),
(2, 'Info' ),
(3, 'Debug' ),
(4, 'Message Queue' ),
(5, 'Black Box' ),
(6, 'SQL' ),
(7, 'Fatal Error' ),
(-8085263649674490830, 'Truxton Registry Expander' ),
(-6832967734355980866, 'Truxton Text Extract' ),
(-6062509211486031558, 'Truxton Archive Expander' ),
(-5747447535886389339, 'Truxton Thumbnail Generator' ),
(-2906983147363067615, 'Truxton Carve' ),
(-377266439974675442, 'Truxton Expand' ),
(428013036636956400, 'Load' ),
(432078912489116784, 'Report' ),
(1224517872063755206, 'Truxton Finished Loads Monitor' ),
(2162449389329439753, 'Truxton SOLR Contents Indexer' ),
(2883055283884316118, 'Truxton Email' ),
(4453584294307222386, 'Truxton Identify' ),
(4630287248337094523, 'RegRipper' ),
(5974441673585158901, 'Truxton SOLR File Indexer' ),
(8079766870631284876, 'Truxton Contact Sheet Creator' );

INSERT INTO "URL"
("ID", "URL")
VALUES
(0, '');

Clear Message Bus Database

This script will clear all of the message queues. The first three queues are the high volume queues.

TRUNCATE TABLE "expand";
TRUNCATE TABLE "status";
TRUNCATE TABLE "load";
TRUNCATE TABLE "alert";
TRUNCATE TABLE "archives";
TRUNCATE TABLE "carve";
TRUNCATE TABLE "contactsheet";
TRUNCATE TABLE "email";
TRUNCATE TABLE "finished";
TRUNCATE TABLE "identify";
TRUNCATE TABLE "langid";
TRUNCATE TABLE "maintenance";
TRUNCATE TABLE "notify";
TRUNCATE TABLE "pfe";
TRUNCATE TABLE "poly";
TRUNCATE TABLE "pst";
TRUNCATE TABLE "registry";
TRUNCATE TABLE "regripper";
TRUNCATE TABLE "remoteexpand";
TRUNCATE TABLE "report";
TRUNCATE TABLE "solrcontentstage";
TRUNCATE TABLE "solrfile";
TRUNCATE TABLE "stitch";
TRUNCATE TABLE "thumbnail";
TRUNCATE TABLE "tqueue";
TRUNCATE TABLE "yara";

Yellowbrick

Find Duplicate Records

Tables like [EntityString] are supposed to be unique by the [ID] column. However, since Yellowbrick is a data warehouse, uniqueness is not enforced. You can detect duplicate primary keys by:

  SELECT "ID", 
         COUNT("ID") AS "Copies"
    FROM "EntityString"
GROUP BY "ID"
  HAVING COUNT("ID") > 1
ORDER BY "Copies" DESC,
         "ID" ASC

Remove Duplicate Records

Now that you have found some duplicates, you probably want to delete them.

WITH rows(rid, rn) AS
(
  SELECT rowid, row_number()
  OVER
  (
    PARTITION BY "ID" ORDER BY "ID"
  )
  FROM "EntityString"
)
DELETE FROM "EntityString"
USING rows
WHERE rows.rn > 1
  AND "EntityString".rowid = rows.rid
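
After the delete completes, re-running the duplicate detection query from above should return no rows:

  SELECT "ID", 
         COUNT("ID") AS "Copies"
    FROM "EntityString"
GROUP BY "ID"
  HAVING COUNT("ID") > 1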

Message Bus

These queries will work only when connected to the Message Bus database.

Number of Messages for Media

This query will give you a list of media and how many messages are queued for each of them in a message queue.

  SELECT "MediaID",
         COUNT("MediaID") AS "MediaCount"
    FROM contactsheet
GROUP BY "MediaID"
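
If you want to see the busiest media first, the same query can be sorted by the message count:

  SELECT "MediaID",
         COUNT("MediaID") AS "MediaCount"
    FROM contactsheet
GROUP BY "MediaID"
ORDER BY "MediaCount" DESC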

See the Coming Messages

This query will show you the next messages to be processed in a message queue.

  SELECT "QueueItemID",
         "MediaID"
    FROM  contactsheet
   WHERE "QueueItemAvailable" = true::boolean 
     AND "Priority" >= 0::integer
ORDER BY "Priority" DESC,
         "MediaID" ASC,
         "QueueItemAvailable" ASC,
         "QueueItemID" ASC 
   LIMIT 100

How Many Media are Waiting

This will tell you how many media have messages in the queue.

SELECT COUNT(DISTINCT("MediaID")) FROM contactsheet
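
To compare the backlog across queues, the same count can be taken once per queue. This sketch assumes the other queue tables have the same "MediaID" column as contactsheet:

SELECT 'expand' AS "Queue", COUNT(DISTINCT("MediaID")) AS "Waiting Media" FROM expand
UNION ALL
SELECT 'status', COUNT(DISTINCT("MediaID")) FROM status
UNION ALL
SELECT 'load', COUNT(DISTINCT("MediaID")) FROM load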

Tags

Export Tags as Python Calls

This will export all user-generated tags on files, unique by hash, as Python calls. It is useful when you want to experiment with tags.

SELECT CONCAT( ' t.taghash("', REPLACE(CAST("HashID" as text), '-', ''), '", "', "Name", '", "', "Description", '")') AS "Code"
FROM
(
           SELECT DISTINCT
                  "File"."HashID",
                  "Tag"."Name",
                  "Tag"."Description"
             FROM "Tagged"
  LEFT OUTER JOIN "File" ON "File"."ID" = "Tagged"."ItemID"
  LEFT OUTER JOIN "Tag"  ON "Tag"."ID"  = "Tagged"."TagID"
            WHERE "Tagged"."Source" = 2
              AND "Tagged"."ObjectTypeID" = 9
         ORDER BY "File"."HashID" ASC
)
AS A
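
To write the generated Python straight to a file, you can use psql's output redirection meta-commands before running the query. The file name here is just an example:

\t on
\a
\o tags.py
-- run the export query above, then close the file:
\o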

Locations

Find Sensitive Site Violations

This is the query Truxton uses to find Sensitive Site violations in a piece of media.

SELECT "SensitiveSite"."ID", 
    "SensitiveSite"."SensitiveSiteListID", 
    "SensitiveSite"."Latitude", 
    "SensitiveSite"."Longitude", 
    "SensitiveSite"."Distance", 
    "SensitiveSite"."Name", 
    "Location"."ID" AS "LocationID", 
    "Location"."FileID", 
    "Location"."MediaID", 
    "Location"."Label", 
    "SensitiveSiteList"."Name" AS "ListName", 
    "SensitiveSiteList"."Description" AS "ListDescription", 
    earth_distance( ll_to_earth( "SensitiveSite"."Latitude", "SensitiveSite"."Longitude" ),
                    ll_to_earth( "Location"."Latitude", "Location"."Longitude" ) ) AS "OffendingDistance", 
    "Location"."LocationTypeID", 
    "Location"."Latitude", 
    "Location"."Longitude" 
FROM "SensitiveSite"
INNER JOIN "SensitiveSiteList" ON ( "SensitiveSite"."SensitiveSiteListID" = "SensitiveSiteList"."ID" )
INNER JOIN "Location" ON earth_box( ll_to_earth( "SensitiveSite"."Latitude", "SensitiveSite"."Longitude" ), "SensitiveSite"."Distance" ) @> ll_to_earth( "Location"."Latitude", "Location"."Longitude" ) 
WHERE "Location"."MediaID" = '1b6df4c7-9c11-bd10-44a5-05ab0f09b9e1'::uuid
ORDER BY "ListName", "Label"
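
The ll_to_earth(), earth_distance() and earth_box() functions come from PostgreSQL's earthdistance extension, which depends on the cube extension. If the query above fails because these functions do not exist, install both extensions in the database first:

CREATE EXTENSION IF NOT EXISTS cube;
CREATE EXTENSION IF NOT EXISTS earthdistance;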

External Links

The following links have proven useful: