We have an API which makes heavy use of JSONP.
In context, that means the body of the response from our API must be
wrapped in a callback parameter that’s passed in the same request.
We currently store the JSON output from our API in Memcached, and on
subsequent requests, we jump into our application, pull the JSON output
out of Memcached, wrap the output in the callback parameter, and return
the response. This works OK, but we’d much rather utilize Nginx and its
Memcached module.
So, I was thinking it would be great if we could do the following. We
would store the JSON output from our API in Memcached, based on the URI.
On subsequent requests, the key would be found via the Memcached module,
and the output wrapped in the current request's callback parameter. This
way we wouldn't need to jump into our application, and everything would
be handled via Nginx.
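Concretely, the plain Memcached-module setup I have in mind is something
like this (just a sketch; the path, key scheme, and port are placeholders):
location /api/ {
    default_type application/json;
    set $memcached_key $uri;          # cached JSON is stored under the request URI
    memcached_pass 127.0.0.1:11211;
    # this serves the cached JSON as-is; what's missing is wrapping it in
    # the callback parameter from the current request
}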
This would obviously require some customizations/extensions to the
Memcached module, and I wanted to get some feedback on whether this
scenario is plausible, whether there are better approaches, etc.
Any suggestions would be greatly appreciated,
Thanks!
Thank you for the great suggestion, and after I read up on the SSI
module, I think I understand the approach.
So, the actual content I store in the cache would be something like:
({"cat": "meow", "dog": "ruff"})
On a subsequent request, that content would be retrieved, and an SSI
include would be replaced with the response (say, "blah123") from the
request to http://example.com/jsonp_ssi/$args.
That way, I could parse out the callback value in that parallel request,
and the response body would end up something like:
blah123({"cat": "meow", "dog": "ruff"})
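Roughly, one way I can picture wiring this up on the nginx side (untested
sketch: here the include directive is stored in memcached together with the
JSON, and the location names, the ssi_types/default_type lines, and the
backend behind /jsonp_ssi are all assumptions on my part):
# value stored in memcached under the URI key, include directive and all:
# <!--# include virtual="/jsonp_ssi/$args" -->({"cat": "meow", "dog": "ruff"})
location /api/ {
    ssi on;                        # let the SSI filter expand the include
    ssi_types application/json;    # assumption: the cached content isn't text/html
    default_type application/json;
    set $memcached_key $uri;
    memcached_pass 127.0.0.1:11211;
}
location /jsonp_ssi {
    # hypothetical endpoint: the application parses the callback out of the
    # args and returns just the callback name (e.g. "blah123")
    proxy_pass http://app_backend;
}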
I’d need to make some changes to my application and framework, but if
the above is correct, this will definitely work!
I may still keep reading Emiller's module guide however, as if this were
a body filter, it would save that extra request (and it would be a fun
exercise ;)).
We could think out a better approach.
Yep, nginx contains a lot of fun
By the way, there is a mailing list for developers: nginx-devel.
Let me know what you think,
Thanks!
Dylan
Not exactly
AFAIU, you have a memcache full of key/data pairs like this:
john: {age: 20, gender: 'male'}
ann: {age: 18, gender: 'female'}
bill: {age: 31, gender: 'male'}
igor: {age: 39, gender: 'male', genius: true}
And you want to load some value into your page by a URL like
example.com/memc/john with a script tag (cross-domain technique, I guess):
… so since the $memcached_key will match, Nginx will want to respond
with:
john: {age: 20, gender: 'male'}
However, we really want to respond with the above value, but wrapped in
the callback value from the request, like:
ding2222222(john: {age: 20, gender: 'male'})
I'll give that a whirl, and other combinations if that doesn't work. If I
run into any brick walls, or get crafty and create that body filter,
I’ll definitely post back here with the outcome. Thank you very much
for your guidance thus far!
Hey agentzh! Oooo… I like the generalization of the XSS proposition,
and I’d love to hear
what ideas you have in store. I’m going to play with the SSI module
this week and see if it can solve
my problems, but if it can’t, I’m going to have to move forward with
that body filter.
Ping me off the thread (dylans (.at.) gmail.com), and we can chat!
Have a look at agentzh's echo module too, specifically the echo_location
/ echo_location_async. This may (should) be quicker than running
subrequests through SSI, because the SSI would need to be parsed for
each request, but the logic would only need to be read once for the
echo_XXX functions. Unless you’re using if…else clauses in your SSI,
I think everything you can do with SSI can be done with the echo module.
And you want to load some value into your page by a URL like example.com/memc/john with a script tag (cross-domain technique, I guess):
Oh oh! I've been thinking about implementing an ad-hoc module named
ngx_xss for this trick, as well as some other more advanced ones for
cross-domain POST. Glad to see SSI can achieve a similar goal to this
extent.
dylanz: I wonder if we can work on ngx_xss together
This is probably going to work, but it needs that "-n" feature that
Marcus suggested. Is this a valid feature request?
If so, I’d be happy to add it to the module. agentzh, let me know what
you think
That’s only if the data can be generated from Nginx variables, though,
which I was getting the impression it wasn’t.
I was thinking of something like:
location /blah {
    echo "$arg_callback(";
    echo_location_async /subrequest;
    echo ")";
}
location /subrequest {
    # (various options)
}
(as you previously suggested)
since if the data is not related to the request, it’s probably better to
store it elsewhere.
Inside the subrequest block, you could have a memcached pass (if the
data is definitely going to be there), a try_files with memcached first
/ fastcgi/proxy to update the data in memcached (if it might not be
there), or a fastcgi/proxy pass, which utilizes Nginx’s cache (this may
be faster than using memcached to store the data).
Even better would be:
location /blah {
    echo -n "$arg_callback(";
    echo_location_async /subrequest;
    echo -n ")";
}
Where the "-n" works like the command-line version to not add a newline
(a new feature perhaps?).
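For the "might not be there" case above, the /subrequest block could look
something like this (just a sketch; I've used the error_page-on-miss idiom
rather than try_files here, and the backend address is a placeholder):
location /subrequest {
    default_type application/json;
    set $memcached_key $uri;
    memcached_pass 127.0.0.1:11211;
    # memcached_pass answers 404 on a cache miss, so fall through to the
    # application, which generates the JSON and writes it back to memcached
    error_page 404 = @generate;
}
location @generate {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;   # or a proxy_pass to the application
}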
I noticed that "echo" isn't outputting on non-async events, for example:
echo "before";
set $memcached_key $uri;
memcached_pass 127.0.0.1:11211;
echo "after";
That results in the contents of the memcached_pass, but doesn’t include
the echo output.
I tried throwing in some echo_flush commands to see if that would help,
but it didn’t.
However, it does work if I use the after/before echo commands, for
example:
echo_before_body "before";
set $memcached_key $uri;
memcached_pass 127.0.0.1:11211;
echo_after_body "after";
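Putting that together, I'm thinking something along these lines should give
me the JSONP wrapping in a single location (untested sketch; it assumes the
callback arrives as a "callback" query argument, hence $arg_callback, and it
still has the stray-newline issue that the -n flag would address):
location /api/ {
    default_type application/json;
    echo_before_body "$arg_callback(";   # callback name from ?callback=...
    set $memcached_key $uri;
    memcached_pass 127.0.0.1:11211;
    echo_after_body ")";
}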
I added the -n flag option to echo.c, and it works when I'm not doing
that proxy pass, for example, this:
echo -n "hello";
echo -n "there";
echo "world";
… produces "hello there world", all on one line.
Is that known behaviour? If so, I'll work around it. If it's not, let me
know and I can see what I can do about fixing it.
location /second {
    set $memcached_key $uri;
    memcached_pass 127.0.0.1:11211;
}
The reason has to do with how the internals of Nginx work: everything
that uses echo_xxx should follow in order. If you try mixing echo_xxx
statements with other directives, you'll likely get interesting
results.
I added the -n flag option to echo.c, and it works when I'm not doing that
proxy pass, for example, this:
Cool. If you haven’t done so already, it might be useful to add it to
any of the other echo_xxx directives that automatically add a newline
(e.g. echo_before_body, echo_after_body?) - if it's not too much trouble, of
course.
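e.g., purely hypothetically, once those directives grow the same flag, the
wrapping could be done without the stray newlines:
echo_before_body -n "$arg_callback(";   # -n here is the suggested (not yet existing) flag
set $memcached_key $uri;
memcached_pass 127.0.0.1:11211;
echo_after_body -n ")";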