
Commit

Arturlang committed Apr 23, 2020
1 parent fe26034 commit 0b34e20
Showing 16 changed files with 1,303 additions and 92 deletions.
2 changes: 1 addition & 1 deletion .dockerignore
Original file line number Diff line number Diff line change
@@ -17,7 +17,7 @@ TGS3.json
cfg
data
SQL
-tgui/node_modules
+node_modules
tgstation.dmb
tgstation.int
tgstation.rsc
14 changes: 12 additions & 2 deletions .github/CONTRIBUTING.md
@@ -257,6 +257,12 @@ This prevents nesting levels from getting deeper than they need to be.
* Please attempt to clean out any dirty variables that may be contained within items you alter through var-editing. For example, due to how DM functions, changing the `pixel_x` variable from 23 to 0 will leave a dirty record in the map's code of `pixel_x = 0`. Likewise, this can happen when changing an item's icon to something else and then back. This can lead to issues where an item's icon has changed within the code, but becomes broken on the map because the map still attempts to use the old entry.
* Areas should not be var-edited on a map to change their name or attributes. All areas of a single type and its altered instances are considered the same area within the code, and editing their variables on a map can lead to issues with powernets and event subsystems which are difficult to debug.

### User Interfaces
* All new player-facing user interfaces must use TGUI.
* Raw HTML is permitted for admin and debug UIs.
* Documentation for TGUI can be found at:
* [tgui/README.md](../tgui/README.md)
* [tgui/tutorial-and-examples.md](../tgui/docs/tutorial-and-examples.md)

### Other Notes
* Code should be modular where possible; if you are working on a new addition, then strongly consider putting it in its own file unless it makes sense to put it with similar ones (i.e. a new tool would go in the "tools.dm" file)
@@ -337,7 +343,7 @@ for(var/obj/item/sword/S in bag_of_swords)
	if(!best_sword || S.damage > best_sword.damage)
		best_sword = S
```
specifies a type for DM to filter by.

With the previous example that's perfectly fine: we only want swords. But what if the bag only contains swords? Is DM still going to filter because we gave it a type to filter by? YES, and that is where the inefficiency comes in. Wherever a list (or other container, such as an atom (in which case you're technically accessing its special contents list, but that's irrelevant)) contains datums of the same datatype or subtypes of the datatype you require for your loop's body,
you can circumvent DM's filtering and automatic ```istype()``` checks by writing the loop as such:
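As an illustrative sketch (not the snippet elided by the diff view), reusing `bag_of_swords` and `best_sword` from the earlier example:

```
var/obj/item/sword/best_sword
for(var/s in bag_of_swords)
	var/obj/item/sword/S = s
	if(!best_sword || S.damage > best_sword.damage)
		best_sword = S
```

Because the loop variable `s` carries no type, DM iterates the whole container without the implicit ```istype()``` filtering; the typed declaration inside the loop is just a cast, and is only safe when the container really does hold nothing but that type.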
@@ -374,7 +380,7 @@ mob
```
This does NOT mean that you can access it everywhere like a global var. Instead, it means that the var will only exist once for all instances of its type; in this case, a single var shared across every mob. (Much more like the keyword `static` in other languages like PHP/C++/C#/Java.)

Isn't that confusing?

There is also an undocumented keyword called `static` that has the same behaviour as `global` but more correctly describes BYOND's behaviour. Therefore, we always use `static` instead of `global` where we need it, as it reduces surprise when reading BYOND code.
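For illustration (a sketch, not part of the original file), the two spellings behave identically:

```
mob
	var/global/kills_this_round = 0 // one copy shared by every mob
	var/static/deaths_this_round = 0 // same behaviour, clearer keyword
```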

@@ -394,6 +400,10 @@ There is no strict process when it comes to merging pull requests. Pull requests

* Please explain why you are submitting the pull request, and how you think your change will be beneficial to the game. Failure to do so will be grounds for rejecting the PR.

* If your pull request is not finished, make sure it is at least testable in a live environment. Pull requests that do not meet this requirement will be closed. You may request that a maintainer reopen the pull request when you're ready, or make a new one.

* While we have no issue helping contributors (and especially new contributors) bring reasonably sized contributions up to standards via the pull request review process, larger contributions are expected to pass a higher bar of completeness and code quality *before* you open a pull request. Maintainers may close larger pull requests that are deemed substantially flawed. You should take some time to discuss with maintainers or other contributors how to improve the changes.

## Porting features/sprites/sounds/tools from other codebases

If you are porting features/tools from other codebases, you must give credit where it's due. Typically, crediting them in your pull request and the changelog is the recommended way of doing it. Take note of what license they use, though; porting from AGPLv3 and GPLv3 codebases is allowed.
6 changes: 3 additions & 3 deletions .github/workflows/autobuild_tgui.yml
@@ -5,8 +5,8 @@ on:
branches:
- 'master'
paths:
-      - 'tgui-next/**.js'
-      - 'tgui-next/**.scss'
+      - 'tgui/**.js'
+      - 'tgui/**.scss'

jobs:
build:
@@ -23,7 +23,7 @@ jobs:
node-version: '>=12.13'
- name: Build TGUI
run: bin/tgui --ci
-        working-directory: ./tgui-next
+        working-directory: ./tgui
- name: Commit Artifacts
run: |
git config --local user.email "[email protected]"
2 changes: 1 addition & 1 deletion .travis.yml
@@ -23,7 +23,7 @@ matrix:
- tools/travis/check_filedirs.sh tgstation.dme
- tools/travis/check_changelogs.sh
- find . -name "*.php" -print0 | xargs -0 -n1 php -l
-    - find . -name "*.json" -not -path "./tgui/node_modules/*" -print0 | xargs -0 python3 ./tools/json_verifier.py
+    - find . -name "*.json" -not -path "*/node_modules/*" -print0 | xargs -0 python3 ./tools/json_verifier.py
- tools/travis/build_tgui.sh
- tools/travis/check_grep.sh
- python3 tools/travis/check_line_endings.py
2 changes: 1 addition & 1 deletion code/controllers/subsystem/tgui.dm
@@ -11,7 +11,7 @@ SUBSYSTEM_DEF(tgui)
	var/basehtml // The HTML base used for all UIs.

/datum/controller/subsystem/tgui/PreInit()
-	basehtml = file2text('tgui-next/packages/tgui/public/tgui-main.html')
+	basehtml = file2text('tgui/packages/tgui/public/tgui.html')

/datum/controller/subsystem/tgui/Shutdown()
	close_all_uis()
130 changes: 130 additions & 0 deletions code/modules/asset_cache/asset_cache.dm
@@ -0,0 +1,130 @@
/*
Asset cache quick users guide:
Make a datum at the bottom of this file with your assets for your thing.
The simple subsystem will most likely be of use for most cases.
Then call get_asset_datum() with the type of the datum you created and store the return value.
Then call .send(client) on that stored return value.
You can set verify to TRUE if you want send() to sleep until the client has the assets.
*/
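//Illustrative sketch only, not part of this commit: the steps above, assuming a
//hypothetical /datum/asset/simple subtype that maps asset names to files.
/*
/datum/asset/simple/my_thing
	assets = list(
		"my_thing.png" = 'icons/ui/my_thing.png'
	)

/datum/my_ui/proc/show_to(client/C)
	var/datum/asset/my_assets = get_asset_datum(/datum/asset/simple/my_thing)
	my_assets.send(C)
*/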


// Max amount of time (ds) to spend sending per asset; if this gets exceeded we cancel the sleeping.
// This is doubled for the first asset, then added per asset after.
#define ASSET_CACHE_SEND_TIMEOUT 7

//When sending multiple assets, how many before we give the client a quaint little "sending resources" message
#define ASSET_CACHE_TELL_CLIENT_AMOUNT 8

//This proc sends the asset to the client, but only if it needs it.
//This proc blocks (sleeps) unless verify is set to FALSE.
/proc/send_asset(client/client, asset_name, verify = TRUE)
	return send_asset_list(client, list(asset_name), verify)

//This proc blocks (sleeps) unless verify is set to FALSE.
/proc/send_asset_list(client/client, list/asset_list, verify = TRUE)
	if(!istype(client))
		if(ismob(client))
			var/mob/M = client
			if(M.client)
				client = M.client
			else
				return FALSE
		else
			return FALSE

	var/list/unreceived = list()
	var/list/sending = list()

	for (var/asset_name in asset_list)
		var/asset_file = SSassets.cache[asset_name]
		if (!asset_file)
			continue

		var/asset_md5 = md5(asset_file) || md5(fcopy_rsc(asset_file))

		if (client.sent_assets[asset_name] == asset_md5)
			continue
		if (client.sending_assets.Find(asset_name))
			if (!verify)
				continue
			sending += asset_name
			continue

		unreceived[asset_name] = asset_md5

	var/t = 0
	var/timeout_time = DS2TICKS(ASSET_CACHE_SEND_TIMEOUT * client.sending_assets.len)

	if (unreceived.len)
		if (unreceived.len >= ASSET_CACHE_TELL_CLIENT_AMOUNT)
			to_chat(client, "Sending Resources...")

		var/job
		if (verify)
			job = ++client.last_asset_job

		for(var/asset in unreceived)
			if (SSassets.cache[asset])
				log_asset("Sending asset [asset] to client [client]")
				client << browse_rsc(SSassets.cache[asset], asset)
				if(verify)
					client.sending_assets[asset] = job

		if(!verify)
			client.sent_assets |= unreceived
			addtimer(CALLBACK(client, /client/proc/asset_cache_update_json), 1 SECONDS, TIMER_UNIQUE|TIMER_OVERRIDE)
		else
			client.sending_assets |= unreceived

			client << browse({"<script>window.location.href="?asset_cache_confirm_arrival=[job]"</script>"}, "window=asset_cache_browser&file=asset_cache_send_verify.htm")

			while(client && !client.completed_asset_jobs.Find(job) && t < timeout_time) // Reception is handled in Topic()
				stoplag(1) // Lock up the caller until this is received.
				t++

			if(client)
				client.sending_assets -= unreceived
				client.sent_assets = unreceived | client.sent_assets //if we sent an updated version of an asset, we want to replace the md5 in the client's list of sent assets
				client.completed_asset_jobs -= job
				addtimer(CALLBACK(client, /client/proc/asset_cache_update_json), 1 SECONDS, TIMER_UNIQUE|TIMER_OVERRIDE)

		. = TRUE

	else if (sending.len) //else if because these sends are ordered enough to trust that assets sent later on will arrive after ones that were already in the queue.
		for (var/sending_asset in sending)
			var/sending_asset_jobid = client?.sending_assets[sending_asset]
			if (!sending_asset_jobid)
				continue

			while(client && client.last_completed_asset_job < sending_asset_jobid && t < timeout_time) // Reception is handled in Topic()
				stoplag(1) // Lock up the caller until this is received.
				t++

		. = TRUE

//This proc will download the files without clogging up the browse() queue, used for passively sending files on connection start.
//The proc calls procs that sleep for long times.
/proc/getFilesSlow(client/client, list/files, register_asset = TRUE)
	for(var/file in files)
		if (!client)
			break
		if (register_asset)
			register_asset(file, files[file])
		if (send_asset(client, file))
			stoplag(0) //queuing calls like this too quickly can cause issues in some client versions

//This proc "registers" an asset: it adds it to the cache for further use. You cannot touch it from this point on or you'll fuck things up.
//If it's an icon or something, be careful: you'll have to copy it before further use.
/proc/register_asset(asset_name, asset)
	SSassets.cache[asset_name] = asset

//Generated names do not include the file extension.
//Used mainly for code that deals with assets in a generic way.
//The same asset will always lead to the same asset name.
/proc/generate_asset_name(file)
	return "asset.[md5(fcopy_rsc(file))]"
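//Illustrative sketch only, not part of this commit: registering and sending a
//one-off generated asset by hand (icon path hypothetical).
/*
/proc/send_pizza_preview(client/C)
	var/icon/I = icon('icons/obj/food.dmi', "pizza")
	var/asset_name = generate_asset_name(I) //"asset.<md5 of the file>"
	register_asset(asset_name, I)
	send_asset(C, asset_name, verify = FALSE)
*/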

43 changes: 43 additions & 0 deletions code/modules/asset_cache/asset_cache_client.dm
@@ -0,0 +1,43 @@

/client
	var/list/sent_assets = list() /// List of all asset filenames sent to this client by the asset cache, along with their associated md5s
	var/list/completed_asset_jobs = list() /// List of all completed blocking send jobs awaiting acknowledgement by send_asset
	var/list/sending_assets = list() /// List of all assets currently being sent in blocking mode
	var/last_asset_job = 0 /// Last asset send job id.
	var/last_completed_asset_job = 0 /// Highest job id this client has confirmed receiving.

/// Process asset cache client topic calls for "asset_cache_confirm_arrival=[INT]"
/client/proc/asset_cache_confirm_arrival(job_id)
	var/asset_cache_job = round(text2num(job_id))
	//Because we skip the limiter, we have to make sure this is a valid arrival and not somebody tricking us into letting them append to a list without limit.
	if (asset_cache_job > 0 && asset_cache_job <= last_asset_job && !(asset_cache_job in completed_asset_jobs))
		completed_asset_jobs += asset_cache_job
		last_completed_asset_job = max(last_completed_asset_job, asset_cache_job)
	else
		return asset_cache_job || TRUE
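//Illustrative sketch only, not part of this commit: one way a client Topic()
//override might route the confirmation URL produced by send_asset_list().
/*
/client/Topic(href, list/href_list)
	if (href_list["asset_cache_confirm_arrival"])
		return asset_cache_confirm_arrival(href_list["asset_cache_confirm_arrival"])
	return ..()
*/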


/// Process asset cache client topic calls for "asset_cache_preload_data=[HTML+JSON_STRING]"
/client/proc/asset_cache_preload_data(data)
	/*
	var/jsonend = findtextEx(data, "{{{ENDJSONDATA}}}")
	if (!jsonend)
		CRASH("invalid asset_cache_preload_data, no json end marker")
	var/json = html_decode(copytext(data, 1, jsonend))
	*/
	var/json = data
	var/list/preloaded_assets = json_decode(json)

	for (var/preloaded_asset in preloaded_assets)
		if (copytext(preloaded_asset, findlasttext(preloaded_asset, ".") + 1) in list("js", "jsm", "htm", "html"))
			preloaded_assets -= preloaded_asset
			continue
	sent_assets |= preloaded_assets

//Updates the client-side stored html/json combo file used to keep track of what assets the client has between restarts/reconnects.
/client/proc/asset_cache_update_json(verify = FALSE, list/new_assets = list())
	if (world.time - connection_time < 10 SECONDS) //don't override the existing data file on a new connection
		return
	if (!islist(new_assets))
		new_assets = list("[new_assets]" = md5(SSassets.cache[new_assets]))

	src << browse(json_encode(new_assets | sent_assets), "file=asset_data.json&display=0")
